Columns: Text (string, 45 to 130k characters), Id (string, 8 characters), Summary (string, 55 to 2.67k characters).
a generative model for parsing natural language to meaning representations in this paper we present an algorithm for learning a generative model of natural language sentences together with their formal meaning representations with hierarchical structures the model is applied to the task of mapping sentences to hierarchical representations of their underlying meaning we introduce dynamic programming techniques for efficient training and decoding in experiments we demonstrate that the model when coupled with a discriminative reranking technique achieves stateoftheart performance when tested on two publicly available corpora the generative model degrades robustly when presented with instances that are different from those seen in training this allows a notable improvement in recall compared to previous models to enable computers to understand natural human language is one of the classic goals of research in natural language processingrecently researchers have developed techniques for learning to map sentences to hierarchical representations of their underlying meaning one common approach is to learn some form of probabilistic grammar which includes a list of lexical items that models the meanings of input words and also includes rules for combining lexical meanings to analyze complete sentencesthis approach performs well but is constrained by the use of a single learned grammar that contains a fixed set of lexical entries and productionsin practice such a grammar may lack the rules required to correctly parse some of the new test examplesin this paper we develop an alternative approach that learns a model which does not make use of an explicit grammar but instead models the correspondence between sentences and their meanings with a generative processthis model is defined over hybrid trees whose nodes include both natural language words and meaning representation tokensinspired by the work of collins the generative model builds trees by recursively creating nodes at each level according to a markov processthis implicit grammar representation leads to flexible learned models that generalize wellin practice we observe that it can correctly parse a wider range of test examples than previous approachesthe generative model is learned from data that consists of sentences paired with their meaning representationshowever there is no explicit labeling of the correspondence between words and meaning tokens that is necessary for building the hybrid treesthis creates a challenging hiddenvariable learning problem that we address with the use of an insideoutside algorithmspecifically we develop a dynamic programming parsing algorithm that leads to o time complexity for inference where n is the sentence length and m is the size of meaning structurethis approach allows for efficient training and decodingin practice we observe that the learned generative models are able to assign a high score to the correct meaning for input sentences but that this correct meaning is not always the highest scoring optionto address this problem we use a simple reranking approach to select a parse from a kbest list of parsesthis pipelined approach achieves stateoftheart performance on two publicly available corporain particular the flexible generative model leads to notable improvements in recall the total percentage of sentences that are correctly parsedin section 9 we will compare performance with the three existing systems that were evaluated on the same data sets we considersilt learns deterministic rules to transform either 
sentences or their syntactic parse trees to meaning structureswasp is a system motivated by statistical machine translation techniquesit acquires a set of synchronous lexical entries by running the ibm alignment model and learns a loglinear model to weight parseskrisp is a discriminative approach where meaning representation structures are constructed from the natural language strings hierarchicallyit is built on top of svmstruct with string kernelsadditionally there is substantial related research that is not directly comparable to our approachsome of this work requires different levels of supervision including labeled syntactic parse trees others do not perform lexical learning finally recent work has explored learning to map sentences to lambdacalculus meaning representations we restrict our meaning representation formalism to a variable free version as presented in a training instance consists of a natural language sentence and its corresponding meaning representation structure consider the following instance taken from the geoquery corpus the nl sentence how many states do not have rivers consists of 8 words including punctuationthe mr is a hierarchical tree structure as shown in figure 1following an inorder traversal of this mr tree we can equivalently represent it with the following list of meaning representation productions each such mr production consists of three components a semantic category a function symbol which can be omitted and a list of argumentsan argument can be either a child semantic category or a constanttake production for example it has a semantic category num a function symbol count and a child semantic category state as its only argumentproduction has river as its semantic category river as the function symbol and all is a constantwe describe in this section our proposed generative model which simultaneously generates a nl sentence and an mr structurewe denote a single nl word as w a contiguous sequence of nl words as w and a complete nl sentence as w in the mr structure we denote a semantic category as m we denote a single mr production as ma or ma pα where ma is the semantic category for this production pα is the function symbol and mb mc are the child semantic categorieswe denote ma as an mr structure rooted by an mr production ma and mq an mr structure for a complete sentence rooted by an mr production mathe model generates a hybrid tree that represents a sentence w w1 w2 paired with an mr structure mq rooted by mafigure 2 shows part of a hybrid tree that is generated as followsgiven a semantic category ma we first pick an mr production ma that has the form ma pα which gives us the function symbol pα as well as the child semantic categories mb and mcnext we generate the hybrid sequence of child nodes w1 mb w2 mc which consists of nl words and semantic categoriesafter that two child mr productions mb and mc are generatedthese two productions will in turn generate other hybrid sequences and productions recursivelythis process produces a hybrid tree t whose nodes are either nl words or mr productionsgiven this tree we can recover a nl sentence w by recording the nl words visited in depthfirst traversal order and can recover an mr structure m by following a treespecific traversal order defined by the hybridpatterns we introduce belowfigure 3 gives a partial hybrid tree for the training example from section 3note that the leaves of a hybrid tree are always nl tokenswith several independence assumptions the probability of generating is defined as where arg refers 
to the position of the child semantic category in the argument listmotivated by collins syntactic parsing models we consider the generation process for a hybrid sequence from an mr production as a markov processgiven the assumption that each mr production has at most two semantic categories in its arguments table 1 includes the list of all possible hybrid patternsin this table m is an mr production y and z are respectively the first and second child semantic category in ms argument listthe symbol w refers to a contiguous sequence of nl words and anything inside can be optionally omittedthe last row contains hybrid patterns that reflect reordering of one productions child semantic categories during the generation processfor example consider the case that the mr production state exclude generates a hybrid sequence state1 do not state2 the hybrid pattern m ywz is associated with this generation stepfor the example hybrid tree in figure 2 we can decompose the probability for generating the hybrid sequence as follows note that unigram bigram or trigram assumptions can be made here for generating nl words and semantic categoriesfor example under a bigram assumption the second to last term can be written as p p where wk2 is the last word in w2we call such additional information that we condition on the contextnote that our generative model is different from the synchronous context free grammars in a number of waysa standard scfg produces a correspondence between a pair of trees while our model produces a single hybrid tree that represents the correspondence between a sentence and a treealso scfgs use a finite set of contextfree rewrite rules to define the model where the rules are possibly weightedin contrast we make use of the more flexible markov models at each level of the generative process which allows us to potentially produce a far wider range of possible treesthere are three categories of parameters used in the modelthe first category of parameters models the generation of new mr productions from their parent mr productions eg p the second models the generation of a hybrid sequence from an mr production eg p p the last models the selection of a hybrid pattern given an mr production eg pwe will estimate parameters from all categories with the following constraints these parameters model the mr structures and can be referred to as mit model parametersthese parameters model the emission of nl words the end symbol and child semantic categories from an mr productionwe call them emission parameters3er 0 1 for all j where r is a hybrid pattern listed in table 1these parameters model the selection of hybrid patternswe name them pattern parameterswith different context assumptions we reach different variations of the modelin particular we consider three assumptions as follows where tk is a semantic category or a nl word and mj is an mr productionin other words generation of the next nl word depends on its direct parent mr production onlysuch a unigram model may help in recall because it requires the least data to estimatemodel ii we make the following assumption where tk1 is the semantic category or nl word to the left of tk ie the previous semantic category or nl wordin other words generation of the next nl word depends on its direct parent mr production as well as the previously generated nl word or semantic category onlythis model is also referred to as bigram modelthis model may help in precision because it conditions on a larger contextmodel iii we make the following assumption we can view 
this model called the mixgram model as an interpolation between model i and iithis model gives us a balanced score for both precision and recallthe mr model parameters can be estimated independently from the other twothese parameters can be viewed as the language model parameters for the mr structure and can be estimated directly from the corpus by simply reading off the counts of occurrences of mr productions in mr structures over the training corpusto resolve data sparseness problem a variant of the bigram katz backoff model is employed here for smoothinglearning the remaining two categories ofparameters is more challengingin a conventional pcfg parsing task during the training phase the correct correspondence between nl words and syntactic structures is fully accessiblein other words there is a single deterministic derivation associated with each training instancetherefore model parameters can be directly estimated from the training corpus by countinghowever in our task the correct correspondence between nl words and mr structures is unknownmany possible derivations could reach the same nlmr pair where each such derivation forms a hybrid treethe hybrid tree is constructed using hidden variables and estimated from the training setan efficient insideoutside style algorithm can be used for model estimation similar to that used in as discussed nextin this section we discuss how to estimate the emission and pattern parameters with the expectation maximization algorithm by using an insideoutside dynamic programming approachdenote ni hmi wii as the ith training instance where mi and wi are the mr structure and the nl sentence of the ith instance respectivelywe also denote nv hmv wvi as an aligned pair of mr substructure and contiguous nl substring where the mr substructure rooted by mr production mv will correspond to the nl substring wvthe symbol h is used to denote a hybrid sequence and the function parent gives the unique mr substructurenl subsequence pair which can be decomposed as h parent returns the set of all possible hybrid sequences under which the pair nv can be generatedsimilarly children gives the nlmr pairs that appear directly below the hybrid sequence h in a hybrid tree and children returns the set of all possible hybrid sequences that n can be decomposed asfigure 4 gives a packed tree structure representing the relations between the entitiesthe formulas for computing inside and outside probabilities as well as the equations for updating parameters are given in figure 5we use a ckystyle parse chart for tracking the probabilitiesit is reasonable to believe that different mr productions that share identical function symbols are likely to generate nl words with similar distribution regardless of semantic categoriesfor example the inside probabilities are defined as the outside probabilities are defined as the count ci where t is a nl word or a semantic category for an instance pair ni update the pattern parameter the count ci where r is a hybrid pattern for an instance pair ni hmi wii river largest and city largest are both likely to generate the word biggestin view of this a smoothing technique is deployedwe assume half of the time words can be generated from the productions function symbol alone if it is not emptymathematically assuming ma with function symbol pa for a nl word or semantic category t we have where θe models the generation of t from an mr production or its function symbol together with the context athough the insideoutside approach already employs packed 
representations for dynamic programming a naive implementation of the inference algorithm will still require o time for 1 them iteration where n and m are the length of the nl sentence and the size of the mr structure respectivelythis is not very practical as in one of the corpora we look at n and m can be up to 45 and 20 respectivelyin this section we develop an efficient dynamic programming algorithm that enables the inference to run in o timethe idea is as followsinstead of treating each possible hybrid sequence as a separate rule we efficiently aggregate the already computed probability scores for hybrid sequences that share identical hybrid patternssuch aggregated scores can then be used for subsequent computationsby doing this we can effectively avoid a large amount of redundant computationsthe algorithm supports both unigram and bigram context assumptionsfor clarity and ease of presentation we primarily make the unigram assumption throughout our discussionwe use β to denote the inside probability for mvwv pair brmv wv c to denote the aggregated probabilities for the mr substructure mv to generate all possible hybrid sequences based on wv with pattern r that covers its cth child onlyin addition we use w to denote a subsequence of w with start index i and end index j we also use βrmv wv to denote the aggregated inside probability for the pair hmv wvi if the hybrid pattern is restricted to r onlyby definition we have relations between βr and br can also be establishedfor example if mv has one child semantic category we have βmwymv wv bmwymv wv 1 for the case when mv has two child semantic categories as arguments we have for example note that there also exist relations amongst b terms for more efficient computation for example analogous but more complex formulas are used for computing the outside probabilitiesupdating of parameters can be incorporated into the computation of outside probabilities efficientlyin the decoding phase we want to find the optimal mr structure m given a new nl sentence w where t is a possible hybrid tree associated with the mw pairhowever it is expensive to compute the summation over all possible hybrid treeswe therefore find the most likely hybrid tree instead parg max max we have implemented an exact topk decoding algorithm for this taskdynamic programming techniques similar to those discussed in section 6 can also be applied when retrieving the top candidateswe also find the viterbi hybrid tree given a nlmr pair which can be done in an analogous waythis tree will be useful for reranking8 reranking and filtering of predictions due to the various independence assumptions we have made the model lacks the ability to express some long range dependencieswe therefore postprocess the best candidate predictions with a discriminative reranking algorithmthe averaged perceptron algorithm has previously been applied to various nlp tasks for discriminative rerankingthe detailed algorithm can be found in in this section we extend the conventional averaged perceptron by introducing an explicit separating plane on the feature spaceour reranking approach requires three components during training a gen function that defines for each nl sentence a set of candidate hybrid trees a single correct reference hybrid tree for each training instance and a feature function can be assigned to each candidate hybrid tree t given a new instance the hybrid tree with the highest score is then picked by the algorithm as the outputin this task the gen function is defined as the output hybrid 
trees of the topk decoding algorithm given the learned model parametersthe correct reference hybrid tree is determined by running the viterbi algorithm on each training nlmr pairthe feature function is discussed in section 82while conventional perceptron algorithms usually optimize the accuracy measure we extend it to allow optimization of the fmeasure by introducing an explicit separating plane on the feature space that rejects certain predictions even when they score highestthe idea is to find a threshold b after w is learned such that a prediction with score below b gets rejectedwe pick the threshold that leads to the optimal fmeasure when applied to the training setwe list in table 2 the set of features we usedexamples are given based on the hybrid tree in figure 3some of the them are adapted from for a natural language parsing taskfeatures 15 are indicator functions while feature 6 is real valuedfeatures that do not appear more than once in the training set are discardedour evaluations were performed on two corpora geoquery and robocupthe geoquery corpus contains mr defined by a prologbased language used in querying a database on yous geographythe robocup corpus contains mr defined by a coaching language used in a robot coaching competitionthere are in total 880 and 300 instances for the two corpora respectivelystandard 10fold cross validations were performed and the microaveraged results are presented in this sectionto make our system directly comparable to previous systems all our experiments were based on identical training and test data splits of both corpora as reported in the experiments of wong and mooney given a training set we first run a variant of ibm alignment model 1 for 100 iterations and then initialize model i with the learned parameter valuesthis ibm model is a wordtoword alignment model that does not model word order so we do not have to linearize the hierarchical mr structuregiven this initialization we train model i for 100 them iterations and use the learned parameters to initialize model ii which is trained for another 100 them iterationsmodel iii is simply an interpolation of the above two modelsas for the reranking phase we initialize the weight vector with the zero vector 0 and run the averaged perceptron algorithm for 10 iterationsfollowing wong and other previous work we report performance in terms of precision recall and fscore again following wong we define the correct output mr structure as followsfor the geoquery corpus an mr structure is considered correct if and only if it retrieves identical results as the reference mr structure when both are issued as queries to the underlying prolog databasefor the robocup corpus an mr structure is considered correct if and only if it has the same string representation as the reference mr structure up to reordering of children of mr productions whose function symbols are commutative such as and or etcwe evaluated the three models with and without rerankingthe results are presented in table 3comparing model i and model ii we noticed that for both corpora model i in general achieves better recall while model ii achieves better precisionthis observation conforms to our earlier expectationsmodel iii as an interpolation of the above two models achieves a much better fmeasure on geoquery corpushowever it is shown to be less effective on robocup corpuswe noticed that compared to the geoquery corpus robocup corpus contains longer sentences larger mr structures and a significant amount of noncompositionalitythese factors 
combine to present a challenging problem for parsing with the generative modelinterestingly although model iii fails to produce better best predictions for this corpus we found that its topk list contains a relatively larger number of correct predictions than model i or model iithis indicates the possibility of enhancing the performance with rerankingthe reranking approach is shown to be quite effectivewe observe a consistent improvement in both precision and fmeasure after employing the reranking phase for each modelamong all the previous models silt wasp and krisp are directly comparable to our modelthey required the same amount of supervision as our system and were evaluated on the same corporawe compare our model with these models in table 4 where the performance scores for the previous systems are taken from for geoquery corpus our model performs substantially better than all the three previous models with a notable improvement in the recall scorein fact if we look at the recall scores alone our bestperforming model achieves a 67 and 98 absolute improvement over two other stateoftheart models wasp and krisp respectivelythis indicates that overall our model is able to handle over 25 of the inputs that could not be handled by previous systemson the other hand in terms of fmeasure we gain a 41 absolute improvement over krisp which leads to an error reduction rate of 22on the robocup corpus our models performance is also ranked the highest1as a generic model that requires minimal assumptions on the natural language our model is natural language independent and is able to handle various other natural languages than englishto validate this point we evaluated our system on a subset of the geoquery corpus consisting of 250 instances with four different nl annotationsas we can see from table 5 our model is able to achieve performance comparable to wasp as reported by wong ments on this paperthe research is partially supported by arf grant r252000240112our model is generic which requires no domaindependent knowledge and should be applicable to a wide range of different domainslike all research in this area the ultimate goal is to scale to more complex opendomain language understanding problemsin future we would like to create a larger corpus in another domain with multiple natural language annotations to further evaluate the scalability and portability of our approachwe presented a new generative model that simultaneously produces both nl sentences and their corresponding mr structuresthe model can be effectively applied to the task of transforming nl sentences to their mr structureswe also developed a new dynamic programming algorithm for efficient training and decodingwe demonstrated that this approach augmented with a discriminative reranking technique achieves stateoftheart performance when tested on standard benchmark corporain future we would like to extend the current model to have a wider range of support of mr formalisms such as the one with lambdacalculus supportwe are also interested in investigating ways to apply the generative model to the inverse task generation of a nl sentence that explains a given mr structure
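The reranking stage described above is compact enough to sketch. The following Python is a minimal illustration, not the authors' implementation: it assumes sparse feature dictionaries, an already-trained averaged-perceptron weight vector w, and helper names (score, rerank, choose_threshold) that are ours. It shows only the separating-plane extension: the top candidate from the k-best list is accepted only if its score clears a threshold b, and b is chosen to maximize F-measure on the training set.

```python
# Minimal sketch of the reranking idea: an averaged-perceptron weight vector w
# scores each candidate hybrid tree, and an explicit threshold b (the
# "separating plane") rejects the top candidate when its score is too low.
# Names and data layouts here are illustrative assumptions.

def score(w, feats):
    """Dot product between a weight dict and a sparse feature dict."""
    return sum(w.get(f, 0.0) * v for f, v in feats.items())

def rerank(w, b, candidates):
    """candidates: list of (features, prediction) from the top-k decoder.
    Returns the best-scoring prediction, or None if its score falls below b."""
    best_feats, best_pred = max(candidates, key=lambda c: score(w, c[0]))
    return best_pred if score(w, best_feats) >= b else None

def choose_threshold(w, training_kbest, gold):
    """Pick the b that gives the best F-measure on the training set; a
    prediction counts as correct only if it equals the gold MR structure."""
    scores = sorted({score(w, feats) for cands in training_kbest
                     for feats, _ in cands})
    best_b, best_f = float("-inf"), -1.0
    for b in [float("-inf")] + scores:
        tp = fp = 0
        for cands, g in zip(training_kbest, gold):
            pred = rerank(w, b, cands)
            if pred is None:
                continue
            tp += (pred == g)
            fp += (pred != g)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / len(gold)
        f = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        if f > best_f:
            best_b, best_f = b, f
    return best_b
```

In the pipeline described above, w would come from running the averaged perceptron over the k-best hybrid trees, with the Viterbi hybrid tree of each training pair serving as the reference.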
D08-1082
A generative model for parsing natural language to meaning representations. In this paper we present an algorithm for learning a generative model of natural language sentences together with their formal meaning representations with hierarchical structures. The model is applied to the task of mapping sentences to hierarchical representations of their underlying meaning. We introduce dynamic programming techniques for efficient training and decoding. In experiments, we demonstrate that the model, when coupled with a discriminative reranking technique, achieves state-of-the-art performance when tested on two publicly available corpora. The generative model degrades robustly when presented with instances that are different from those seen in training; this allows a notable improvement in recall compared to previous models. Our hybrid tree model uses a tree-transformation-based approach. We present a joint generative process that produces a hybrid tree structure containing words, syntactic structures, and meaning representations, where the meaning representations are in a variable-free, tree-structured form. We propose three models for generative semantic parsing: unigram, bigram, and mix-gram.
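To make the three model variants in this summary concrete, the sketch below spells out the emission scores under the unigram, bigram, and mix-gram context assumptions, with the mix-gram model written as an interpolation of the other two. The lookup-table representation of the learned parameters, the smoothing floor, and the 0.5 interpolation weight are assumptions for illustration only.

```python
# Illustrative emission probabilities for the three context assumptions.
# unigram_p[(parent_production, token)] and
# bigram_p[(parent_production, prev_token, token)] stand in for the learned
# emission parameters; 1e-9 is an assumed smoothing floor.

def p_unigram(unigram_p, parent, token):
    # Model I: the next NL word / semantic category depends only on its
    # direct parent MR production.
    return unigram_p.get((parent, token), 1e-9)

def p_bigram(bigram_p, parent, prev_token, token):
    # Model II: it also depends on the previously generated word or
    # semantic category (the "context").
    return bigram_p.get((parent, prev_token, token), 1e-9)

def p_mixgram(unigram_p, bigram_p, parent, prev_token, token, lam=0.5):
    # Model III: an interpolation of Models I and II (weight lam assumed).
    return (lam * p_bigram(bigram_p, parent, prev_token, token)
            + (1.0 - lam) * p_unigram(unigram_p, parent, token))
```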
learning with compositional semantics as structural inference for subsentential sentiment analysis determining the polarity of a sentimentbearing expression requires more than a simple bagofwords approach in particular words or constituents within the expression can interact with each other to yield a particular overall polarity in this paper we view such interactions in light of composiand present a novel learningbased approach that incorporates structural inference motivated by compositional semantics into the learning procedure our experiments show that simple heuristics based on compositional semantics can perform better than learningbased methods that do not incorporate compositional semantics but a method that integrates compositional semantics into learning performs better than all other alternatives we also find that contentword negators not widely employed in previous work play an important role in determining expressionlevel polarity finally in contrast to conventional wisdom we find that expressionlevel classification accuracy additional potentially disambiguating context is considered determining the polarity of sentimentbearing expressions at or below the sentence level requires more than a simple bagofwords approachone of the difficulties is that words or constituents within the expression can interact with each other to yield a particular overall polarityto facilitate our discussion consider the following examples in the first example doubt in isolation carries a negative sentiment but the overall polarity of the sentence is positive because there is a negator not which flips the polarityin the second example both eliminated and doubt carry negative sentiment in isolation but the overall polarity of the sentence is positive because eliminated acts as a negator for its argument doubtin the last example there are effectively two negators not and eliminated which reverse the polarity of doubt twice resulting in the negative polarity for the overall sentencethese examples demonstrate that words or constituents interact with each other to yield the expressionlevel polarityand a system that simply takes the majority vote of the polarity of individual words will not work well on the above examplesindeed much of the previous learningbased research on this topic tries to incorporate salient interactions by encoding them as featuresone approach includes features based on contextual valence shifters1 which are words that affect the polarity or intensity of sentiment over neighboring text spans wilson et al shaikh et alanother approach encodes frequent subsentential patterns as features these might indirectly capture some of the subsentential interactions that affect polarityhowever both types of approach are based on learning models with a flat bagoffeatures some structural information can be encoded as higher order features but the final representation of the input is still a flat feature vector that is inherently too limited to adequately reflect the complex structural nature of the underlying subsentential interactions moilanen and pulman on the other hand handle the structural nature of the interactions more directly using the ideas from compositional semantics dowty et al in short the principle of compositionality states that the meaning of a compound expression is a function of the meaning of its parts and of the syntactic rules by which they are combined dowty et al and moilanen and pulman develop a collection of composition rules to assign a sentiment value to individual 
expressions clauses or sentencestheir approach can be viewed as a type of structural inference but their handwritten rules have not been empirically compared to learningbased alternatives which one might expect to be more effective in handling some aspects of the polarity classification taskin this paper we begin to close the gap between learningbased approaches to expressionlevel polarity classification and those founded on compositional semantics we present a novel learningbased approach that incorporates structural inference motivated by compositional semantics into the learning procedureadopting the view point of compositional semantics our working assumption is that the polarity of a sentimentbearing expression can be determined in a twostep process assess the polarities of the constituents of the expression and then apply a relatively simple set of inference rules to combine them recursivelyrather than a rigid application of handwritten compositional inference rules however we hypothesize that an ideal solution to the expressionlevel polarity classification task will be a method that can exploit ideas from compositional semantics while providing the flexibility needed to handle the complexities of realworld natural language exceptions unknown words missing semantic features and inaccurate or missing rulesthe learningbased approach proposed in this paper takes a first step in this directionin addition to the novel learning approach this paper presents new insights for contentword negators which we define as content words that can negate the polarity of neighboring words or constituentsunlike functionword negators such as not or never contentword negators have been recognized and utilized less actively in previous work wilson et al and moilanen and pulman 2 in our experiments we compare learning and nonlearningbased approaches to expressionlevel polarity classification with and without compositional semantics and find that simple heuristics based on compositional semantics outperform other reasonable heuristics that do not incorporate compositional semantics they can also perform better than simple learningbased methods that do not incorporate compositional semantics combining learning with the heuristic rules based on compositional semantics further improves the performance contentword negators play an important role in determining the expressionlevel polarity and somewhat surprisingly we find that expressionlevel classification accuracy uniformly decreases as additional potentially disambiguating context is consideredin what follows we first explore heuristicbased approaches in 2 then we present learningbased approaches in 3next we present experimental results in 4 followed by related work in 5this section describes a set of heuristicbased methods for determining the polarity of a sentimentbearing expressioneach assesses the polarity of the words or constituents using a polarity lexicon that indicates whether a word has positive or negative polarity and finds negators in the given expression using a negator lexiconthe methods then infer the expressionlevel polarity using votingbased heuristics or heuristics that incorporate compositional semantics the lexicons are described in 23we first explore five simple heuristics based on votingvote is defined as the majority polarity vote by words in a given expressionthat is we count the number of positive polarity words and negative polarity words in a given expression and assign the majority polarity to the expressionin the case of a tie we 
default to the prevailing polarity of the datafor neg we first determine the majority polarity vote as above and then if the expression contains any functionword negator flip the polarity of the majority vote onceneg is similar to neg except we flip the polarity of the majority vote n times after the majority vote where n is the number of functionword negators in a given expressionnegex and negex are defined similarly as neg and neg above except both functionword negators and contentword negators are considered as negators when flipping the polarity of the majority votesee table 1 for summarynote that a word can be both a negator and have a negative prior polarityfor the purpose of voting if a word is defined as a negator per the voting scheme then that word does not participate in the majority votefor brevity we refer to neg and neg collectively as neg and negex and negex collectively as negexwhereas the heuristics above use votingbased inference those below employ a set of handwritten rules motivated by compositional semanticstable 2 shows the definition of the rules along with motivating examplesin order to apply a rule we first detect a syntactic pattern then apply the compose function as defined in table 2 by rule 23 compose first checks whether the first argument is a negator and if so flips the polarity of the second argumentotherwise compose resolves the polarities of its two argumentsnote that if the second argument is a negator we do not flip the polarity of the first argument because the first argument in general is not in the semantic scope of the negation4 instead we treat the second argument as a constituent with negative polaritywe experiment with two variations of the compose function depending on how conflicting polarities are resolved compomc uses a compose function that defaults to the majority class of the polarity of the data5 while compopr uses a compose function that selects the polarity of the argument that has higher semantic priorityfor brevity we refer to compopr and compomc collectively as compothe polarity lexicon is initialized with the lexicon of wilson et al and then expanded using the general inquirer dictionary6 in particular a word contained in at least two of the following categories is considered as positive positiv pstv posaff pleasur virtue increas and a word contained in at least one of the following categories is considered as negative negativ ngtv negaff pain vice hostile fail enlloss wlbloss tranlossfor the negator lexicon we collect a handful of seed words as well as general inquirer words that appear in either notlw or decreas categorythen we expand the list of contentnegators using the synonym information of wordnet to take a simple vote among senses based on parse trees might further improve the performance4moilanen and pulman provide more detailed discussion on the semantic scope of negations and the semantic priorities in resolving polaritieswhen consulting the general inquirer dictionary senses with less than 5 frequency and senses specific to an idiom are droppedwhile we expect that a set of handwritten heuristic rules motivated by compositional semantics can be effective for determining the polarity of a sentimentbearing expression we do not expect them to be perfectinterpreting natural language is such a complex task that writing a perfect set of rules would be extremely challengingtherefore a more ideal solution would be a learningbased method that can exploit ideas from compositional semantics while providing the flexibility to the 
rigid application of the heuristic rulesto this end we present a novel learningbased approach that incorporates inference rules inspired by compositional semantics into the learning procedure to assess the effect of compositional semantics in the learningbased methods we also experiment with a simple classification approach that does not incorporate compositional semantics the details of these two approaches are elaborated in the following subsectionsgiven an expression x consisting of n words xi xn the task is to determine the polarity y e positive negative of xin our simple binary classification approach x is represented as a vector of features f and the prediction y is given by argmaxywf where w is a vector of parameters learned from training datain our experiment we use an online svm algorithm called mira 7 for trainingfor each x we encode the following features a feature that indicates the dominant polarity of words in the given expression without considering the effect of negatorsfor scnegex we count the number of contentword negators as well as functionword negators to determine whether the final polarity should be flippedthen we add a conjunctive feature that indicates the dominant polarity together with whether the final polarity should be flippedfor brevity we refer to scvote and scnegex collectively as scnotice that in this simple binary classification setting it is inherently difficult to capture the compositional structure among words in x because f is merely a flat bag of features and the prediction is governed simply by the dot product of f and the parameter vector w next instead of determining y directly from x we introduce hidden variables z as intermediate decision variables where zi e positive negative negator none so that zi represents whether xi is a word with positivenegative polarity or a negator or none of the abovefor simplicity we let each intermediate decision variable zi be determined independently from other intermediate decision variables and depend only on the input x so that zi argmaxziw f where f is the feature vector encoding around the ith word once we determine the intermediate decision variables we apply the heuristic rules motivated by compositional semantics in order to obtain the final polarity y of xthat is y c where c is the function that applies the compositional inference either compopr or compomcfor training there are two issues we need to handle the first issue is dealing with the hidden variables zbecause the structure of compositional inference c does not allow dynamic programming it is intractable to perform exact expectationmaximization style training that requires enumerating all possible values of the hidden variables zinstead we propose a simple and tractable training rule based on the creation of a soft gold standard for zin particular we exploit the fact that in our task we can automatically construct a reasonably accurate gold standard for z denoted as z as shown in figure 2 we simply rely on the negator and polarity lexiconsbecause z is not always correct we allow the training procedure to replace z with potentially better assignments as learning proceeds in the event that the soft gold standard z leads to an incorrect prediction we search for an assignment that leads to a correct prediction to replace zthe exact procedure is given in figure 1 and will be discussed again shortlyfigure 1 shows how we modify the parameter update rule of mira to reflect the aspect of compositional inferencein the event that the soft gold standard z 
leads to an incorrect prediction we search for zgood the assignment with highest score that leads to a correct prediction and replace z with zgoodin the event of no such zgood being found among the kbest assignments of z we stick with zthe second issue is finding the assignment of z with the highest score w f that leads to an incorrect prediction y cbecause the structure of compositional inference c does not allow dynamic programming finding such an assignment is again intractablewe resort to enumerating only over kbest assignments insteadif none of the kbest assignments of z leads to an incorrect prediction y then we skip the training instance for parameter updatefeaturesfor each xi in x we encode the following features with unseen words in the test data we add features that describe word categories based on the general inquirer dictionarywe add this feature for each xi that is not a stop wordwe also add a number of boolean features that provide following properties of xi using the polarity lexicon and the negator lexicon whether xi is a functionword negator whether xi is a contentword negator whether xi is a negator of any kind the polarity of xi according to wilson et al s polarity lexicon the polarity of xi according to the lexicon derived from the general inquirer dictionary conjunction of the above two features as in the heuristicbased compositional semantics approach we experiment with two variations of this learningbased approach ccicompopr and ccicompomc whose compositional inference rules are compopr and compomc respectivelyfor brevity we refer to both variations collectively as ccicompothe experiments below evaluate our heuristic and learningbased methods for subsentential sentiment analysis in addition we explore the role of context by expanding the boundaries of the sentimentbearing expressions for evaluation we use the multiperspective question answering corpus which consists of 535 newswire documents manually annotated with phraselevel subjectivity informationwe evaluate on all strong sentimentbearing expressions8 as a result we can assume the boundaries of the expressions are givenperformance is reported using 10fold crossvalidation on 400 documents a separate 135 documents were used as a development setbased on pilot experiments on the development data we set parameters for mira as follows slack variable to 05 and the number of incorrect labels for each parameter update to 1the number of iterations for training is set to 1 for simple classification and to 4 for classification with compositional inferencewe use k 20 for classification with compositional inferenceresultsperformance is reported in table 3interestingly the heuristicbased methods neg that only consider functionword negators perform even worse than vote which does not consider negatorson the other hand the negex methods that do consider contentword negators as well as functionword negators perform better than votethis confirms the importance of contentword negators for determining the polarities of expressionsthe heuristicbased methods motivated by compositional semantics compo further improve the performance over negex achieving up to 897 accuracyin fact these heuristics perform even better than the sc learningbased methods this shows that heuristics that take into account the compositional structure of the expression can perform better than learningbased methods that do not exploit such structurefinally the learningbased methods that incorporate compositional inference ccicompo perform better than all of the 
previous methodsthe difference between ccicompopr and scnegex is statistically significant at the 05 level by paired ttestthe difference between compo and any other heuristic that is not based on computational semantics is also statistically significantin addition the difference between ccicompopr and compomc is statistically significant as is the difference between negex and voteone might wonder whether employing additional context outside the annotated expression boundaries could further improve the performanceindeed conventional wisdom would say that it is necessary to employ such contextual information in any case it is important to determine whether our results will apply to more realworld settings where humanannotated expression boundaries are not availableto address these questions we gradually relax our previous assumption that the exact boundaries of expressions are given for each annotation boundary we expand the boundary by x words for each direction up to sentence boundaries where x e 11 5 ociwe stop expanding the boundary if it will collide with the boundary of an expression with a different polarity so that we can consistently recover the expressionlevel gold standard for evaluationthis expansion is applied to both the training and test data and the performance is reported in table 4from this experiment we make the following observations mance for any methodthis shows that most of relevant context for judging the polarity is contained within the expression boundaries and motivates the task of finding the boundaries of opinion expressions the negex methods perform better than vote only when the expression boundaries are reasonably accuratewhen the expression boundaries are expanded up to sentence boundaries they perform worse than votewe conjecture this is because the scope of negators tends to be limited to inside of expression boundaries the compo methods always perform better than any other heuristicbased methodsand their performance does not decrease as steeply as the negex methods as the expression boundaries expandwe conjecture this is because methods based on compositional semantics can handle the scope of negators more adequately among the learningbased methods those that involve compositional inference always perform better than those that do not for any boundariesand learning with compositional inference tend to perform better than the rigid application of heuristic rules although the relative performance gain decreases once the boundaries are relaxedthe task focused on in this paper is similar to that of wilson et al in that the general goal of the task is to determine the polarity in context at a subsentence levelhowever wilson et al formulated the task differently by limiting their evaluation to individual words that appear in their polarity lexiconalso their approach was based on a flat bag of features and only a few examples of what we call contentword negators were employedour use of compositional semantics for the task of polarity classification is preceded by moilanen and pulman but our work differs in that we integrate the key idea of compositional semantics into learningbased methods and that we perform empirical comparisons among reasonable alternative approachesfor comparison we evaluated our approaches on the polarity classification task from semeval07 we achieve 886 accuracy with compopr 901 with scnegex and 876 with ccicompomc9 there are a number of possible reasons for our lower performance vs moilanen and pulman on this data setfirst semeval07 does 
not include a training data set for this task so we use 400 documents from the mpqa corpus insteadin addition the semeval07 data is very different from the mpqa data in that the polarity annotation is given only at the sentence level the sentences are shorter with simpler structure and not as many negators as the mpqa sentences and there are many more instances with positive polarity than in the mpqa corpusnairn et al also employ a polarity propagation algorithm in their approach to the semantic interpretation of implicativeshowever their notion of polarity is quite different from that assumed here and in the literature on sentiment analysisin particular it refers to the degree of commitment of the author to the truth or falsity of a complement clause for a textual entailment taskmcdonald et al use a structured model to determine the sentencelevel polarity and the documentlevel polarity simultaneouslybut decisions at each sentence level does not consider structural inference within the sentenceamong the studies that examined contentword negators niu et al manually collected a small set of such words but their lexicon was designed mainly for the medical domain and the type of negators was rather limitedwilson et al also manually collected a handful of contentword negators but not extensivelymoilanen and pulman collected a more extensive set of negators semiautomatically using wordnet 21 but the empirical effect of such words was not explicitly investigatedin this paper we consider the task of determining the polarity of a sentimentbearing expression considering the effect of interactions among words or constituents in light of compositional semanticswe presented a novel learningbased approach that incorporates structural inference motivated by compositional semantics into the learning procedureour approach can be considered as a small step toward bridging the gap between computational semantics and machine learning methodsour experimental results suggest that this direction of research is promisingfuture research includes an approach that learns the compositional inference rules from datathis work was supported in part by national science foundation grants bcs0624277 and iis0535099 and by department of homeland security grant n00140710152we also thank eric breck lillian lee mats rooth the members of the cornell nlp reading seminar and the emnlp reviewers for insightful comments on the submitted version of the paper
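The voting heuristics and the compose rule described above are simple enough to sketch directly. The Python below is an illustrative reconstruction rather than the authors' code: the toy lexicons and the tie-breaking default are assumptions, and the vote function implements only the NEGEX-style variants that treat function-word and content-word negators alike.

```python
# Sketch of the voting heuristics (VOTE, NEGEX, NEGEX(N)) and the
# compositional "compose" rule. POLARITY and NEGATORS stand in for the
# polarity and negator lexicons; their contents and DEFAULT are toy
# assumptions for illustration.

POLARITY = {"doubt": "negative", "eliminated": "negative", "succeed": "positive"}
NEGATORS = {"not", "never", "eliminated"}    # function- and content-word negators
DEFAULT = "negative"                         # assumed prevailing class of the data

def vote(words, use_negators=False, flip_n_times=False):
    """Majority polarity vote, optionally flipped once (NEGEX) or once per
    negator (NEGEX(N)); negator words do not participate in the vote."""
    negs = [w for w in words if use_negators and w in NEGATORS]
    votes = [POLARITY[w] for w in words if w in POLARITY and w not in negs]
    pos, neg = votes.count("positive"), votes.count("negative")
    label = "positive" if pos > neg else "negative" if neg > pos else DEFAULT
    flips = len(negs) if flip_n_times else (1 if negs else 0)
    if flips % 2 == 1:
        label = "negative" if label == "positive" else "positive"
    return label

def compose(arg1, arg2):
    """COMPO rule: if the first argument is a negator, flip the polarity of
    the second; otherwise resolve the two polarities."""
    if arg1 == "negator":
        return "negative" if arg2 == "positive" else "positive"
    if arg1 == arg2:
        return arg1
    return DEFAULT   # COMPO-MC; COMPO-PR would use semantic priorities instead
```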
D08-1083
Learning with compositional semantics as structural inference for subsentential sentiment analysis. Determining the polarity of a sentiment-bearing expression requires more than a simple bag-of-words approach; in particular, words or constituents within the expression can interact with each other to yield a particular overall polarity. In this paper we view such subsentential interactions in light of compositional semantics and present a novel learning-based approach that incorporates structural inference motivated by compositional semantics into the learning procedure. Our experiments show that simple heuristics based on compositional semantics can perform better than learning-based methods that do not incorporate compositional semantics, but a method that integrates compositional semantics into learning performs better than all other alternatives. We also find that content-word negators, not widely employed in previous work, play an important role in determining expression-level polarity. Finally, in contrast to conventional wisdom, we find that expression-level classification accuracy uniformly decreases as additional, potentially disambiguating, context is considered. Content-word negators are words that are not function words but act semantically as negators. We combine different kinds of negators with lexical polarity items through various compositional semantic models, both heuristic and machine-learned, to improve phrasal sentiment analysis. We propose an algorithm for phrase-based sentiment analysis that learns proper assignments of intermediate sentiment-analysis decision variables given the a priori polarity of the words in the phrase and the phrase-level polarity. We hand-code compositional rules to model the compositional effects of combining different words in a phrase. We categorize polarity-reversing words into two categories: function-word negators such as "not" and content-word negators such as "eliminate".
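As a rough picture of how the learned model and the compositional rules interact at prediction time, the sketch below labels each word with an intermediate decision (positive, negative, negator, or none) using a linear model and then folds a compose rule over those labels to obtain the expression-level polarity. The feature and weight representation is assumed, and the right-to-left linear fold is a simplification; the actual method applies the rules over syntactic patterns.

```python
# Sketch of the prediction step for the learning-based method with
# compositional inference: each word gets an intermediate label z_i chosen
# independently from a linear model, and the expression-level polarity is
# obtained by applying compositional inference to those labels. Names and
# the (feature, label)-keyed weight layout are illustrative assumptions.

LABELS = ("positive", "negative", "negator", "none")

def predict_word_label(w, feats):
    """z_i = argmax_z w . f(x, i, z), each z_i chosen independently."""
    return max(LABELS, key=lambda z: sum(w.get((f, z), 0.0) * v
                                         for f, v in feats.items()))

def predict_polarity(w, word_features, compose, default="negative"):
    """word_features: one sparse feature dict per word in the expression.
    compose(arg1, arg2): a compositional rule such as COMPO-MC (flip arg2
    if arg1 is a negator, otherwise resolve the two polarities)."""
    z = [predict_word_label(w, f) for f in word_features]
    y = None
    # Fold the intermediate labels right to left so a negator applies to the
    # polarity built up from the material to its right; this linear fold is
    # only a stand-in for the pattern-based rule application in the paper.
    for zi in reversed(z):
        if zi == "none":
            continue
        y = zi if y is None else compose(zi, y)
    return y if y in ("positive", "negative") else default
```

With the compose rule sketched earlier, this fold reproduces the paper's motivating examples: "not doubt" and "eliminated doubt" come out positive, while "not eliminated doubt" comes out negative.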
a simple and effective hierarchical phrase reordering model while phrasebased statistical machine translation systems currently deliver stateofthe art performance they remain weak on word order changes current phrase reordering models can properly handle swaps between adjacent phrases but they typically lack theability to perform the kind of longdistance re orderings possible with syntaxbased systems in this paper we present a novel hierarchical phrase reordering model aimed at improvingnonlocal reorderings which seamlessly in tegrates with a standard phrasebased system with little loss of computational efficiency weshow that this model can successfully han dle the key examples often used to motivate syntaxbased systems such as the rotation of a prepositional phrase around a noun phrase we contrast our model with reordering models commonly used in phrasebased systems and show that our approach provides statistically significant bleu point gains for two language pairs chineseenglish and arabicenglish statistical phrasebased systems have consistently delivered stateoftheart performance in recent machine translation evaluations yet these systems remain weak at handling word order changesthe re ordering models used in the original phrasebasedsystems penalize phrase displacements proportionally to the amount of nonmonotonicity with no con sideration of the fact that some words are far more m m d s d eue nviro nme nt m inist ers hold mee tings in l uxem burg 01 23 45 67 8 the d evel opm ent and prog ress of the regi on d m d d figure 1 phase orientations for chinesetoenglish translationwhile previouswork reasonably models phrase reordering in simple ex amples it fails to capture more complex reorderings such as the swapping of of the regionlikely to be displaced than others recent efforts have directly addressed this issue by introducing lexicalized reordering models into phrasebased systems which condition reordering probabilities on the words of each phrase pairthese models distinguish three orientations with respect to the previous phrasemonotone swap anddiscontinuous and as such are primarily de signed to handle local reorderings of neighboring phrasesfig1 is an example where such a modeleffectively swaps the prepositional phrase in luxembourg with a verb phrase and where the noun min isters remains in monotone order with respect to the previous phrase eu environmentwhile these lexicalized reordering models have shown substantial improvements over unlexicalized phrasebased systems these models only have a 848limited ability to capture sensible long distance re orderings as can be seen in fig1the phrase of the region should swap with the rest of the noun phrase yet these previous approaches are unable to model this movement and assume the orientation of this phrase is discontinuous observe that in a shortened version of the same sentence the phrase orientation would be different even though the shortened version has es sentially the same sentence structurecoming from the other direction such observations about phrase reordering between different languages are precisely the kinds of facts that parsing approaches to machinetranslation are designed to handle and do success fully handle in this paper we introduce a novel orientationmodel for phrasebased systems that aims to bet ter capture long distance dependencies and that presents a solution to the problem illustrated in fig1in this example our reordering modeleffectively treats the adjacent phrases the develop ment and and progress as one 
single phrase and the displacement of of the region with respect to thisphrase can be treated as a swapto be able iden tify that adjacent blocks can be merged into larger blocks ourmodel infers binary trees reminis cent of crucially our work distinguishes itself from previous hierarchical models in that it does not rely on any cubictimeparsing algorithms such as cky or the earley algorithm since our reordering model doesnot attempt to resolve natural language ambiguities we can effectively rely on shiftreduce parsing which is done jointly with lefttoright phrasebased beam decoding and thus intro duces no asymptotic change in running timeassuch the hierarchical model presented in this paper maintains all the effectiveness and speed advantages of statistical phrasebased systems while be ing able to capture some key linguistic phenomena which have motivated the development of parsingbased approacheswe also illustrate this with results that are significantly better than previous approaches in particular the lexical reordering models of moses a widely used phrasebased smt system this paper is organized as follows the train ing of lexicalized reordering models is described in section 3in section 4 we describe how to combine shiftreduce parsing with lefttoright beamsearch phrasebased decoding with the same asymptotic running time as the original phrasebased decoderwe finally show in section 6 that our ap proach yields results that are significantly better thanprevious approaches for two language pairs and dif ferent test setswe compare our reordering model with related work using aloglinear approach common to many stateofthe art statistical machine translation systems given an input sentence f which is to be translated into a target sentence e the decodersearches for the most probable translation eaccord ing to the following decision rule e argmax e p argmax e j j1 jh j h j are j arbitrary feature functions over sentence pairsthese features include lexicalized reordering models which are parameterized as follows given an input sentence f a sequence of targetlanguage phrases e currently hypothesized by the decoder and a phrase alignment a that defines a source f ai for eachtranslated phrase ei these models estimate the prob ability of a sequence of orientations o p n i1 p where each oi takes values over the set of possi ble orientations o msd1 the probability is conditioned on both ai1 and ai to make sure that the label oi is consistent with the phrase alignmentspecifically probabilities in these models can be 1we note here that the parameterization and terminology in is slightly differentwe purposely ignore thesedifferences in order to enable a direct comparison between till mans moses and our approach849 b i b i b i s you v you v uv s s figure 2 occurrence of a swap according to the threeorientation models wordbased phrasebased and hier archicalblack squares represent word alignments and gray squares represent blocks identified by phraseextractin block bi is recognized as a swap accord ing to all three modelsin bi is not recognized as a swap by the wordbased modelin bi is recognized as a swap only by the hierarchical modelgreater than zero only if one of the following con ditions is true oi m and ai ai1 1 oi s and ai ai1 1 oi d and ai ai1 6 1at decoding time rather than using the log probability of eq3 as single feature function we follow the approach of moses which is to assign three distinct parameters for the three feature functions fm ni1 log pfs ni1 log pfd ni1 log pthere are two key 
differences between this work and previous orientation models the estimation of factors in eq3 from data the segmentation of e and f into phrases which is static in the case of while it is dynamically updatedwith hierarchical phrases in our casethese differ ences are described in the two next sectionswe present here three approaches for computingp on wordaligned data using rel ative frequency estimateswe assume here that phrase ei spans the word range s t in the target sentence e and that the phrase f ai spans the range orientation model oi m oi s oi d wordbased 01750 00159 08092 phrasebased 03192 00704 06104 hierarchical 04878 01004 04116table 1 class distributions of the three orientation mod els estimated from 12m words of chineseenglish data using the growdiag alignment symmetrization heuristic implemented in moses which is similar to the refinedheuristic of you v in the source sentence f all phrase pairs inthis paper are extracted with the phraseextract algo rithm with maximum length set to 7wordbased orientation model this model an alyzes word alignments at positions and in the alignment grid shown in fig2specifically orientation is set to oi m if contains a word alignment and contains no word alignmentit is set to oi s if contains no word alignment and contains a word alignmentin all other cases it is set to oi d this procedure is exactly the same as the one implemented in moses2 phrasebased orientation model the modelpresented in is similar to the word based orientation model presented above except that it analyzes adjacent phrases rather than specificword alignments to determine orientationsspecif ically orientation is set to oi m if an adjacent phrase pair lies at in the alignmentgridit is set to s if an adjacent phrase pair cov ers and is set to d otherwisehierarchical orientation model this model analyzes alignments beyond adjacent phrasesspecifically orientation is set to oi m if the phrase extract algorithm is able to extract a phrase pair at given no constraint on maximum phrase lengthorientation is s if the same is true at and orientation is d otherwisetable 1 displays overall class distributions according to the three modelsit appears clearly that occurrences of m and s are too sparsely seen in the word based model which assigns more than 80 of its 2httpwwwstatmtorgmosesnmosesadvancedfeatures 850 word phrase hiermonotone with previous p 1 4 and is 0223 0672 0942 2 and also 0201 0560 0948 swap with previous p 3 of china 0303 0617 0651 4 he said 0003 0030 0395 monotone with next p 5 he pointed out that 0601 0770 0991 6 l however 0517 0728 0968 swap with next p 7 0 the development of 0145 0831 0900 8 at the invitation of 0272 0834 0925 table 2 monotone and swap probabilities for specific phrases according to the three models to ensure probabilities are representative we only selected phrase pairs that occur at least 100 times in the training dataprobability mass to d conversely the hierarchical model counts considerably less discontinuous cases and is the only model that accounts for the fact that real data is predominantly monotonesince d is a rather uninformative default cat egory that gives no clue how a particular phraseshould be displaced we will also provide mt evalu ation scores for a set of classes that distinguishes between left and right discontinuitymsdldr a choice that is admittedly more lin guistically motivatedtable 2 displays orientation probabilities for con crete exampleseach example was put under one of the four categories that linguistically seems thebest 
match and we provide probabilities for that cat egory according to each modelnote that whilewe have so far only discussed lefttoright reorder ing models it is also possible to build righttoleftmodels by substituting ai1 with ai1 in eq3ex amples for righttoleft models appear in the second half of the tablethe table strongly suggests that the hierarchical model more accurately determinesthe orientation of phrases with respect to large contextual blocksin examples 1 and 2 the hierarchi cal model captures the fact that coordinated clauses almost always remain in the same order and that words should generally be forbidden to move from one side of andto the other side a constraint thatis difficult to enforce with the other two reorder ing modelsin example 4 the first two models completely ignore that he saidsometimes rotates around its neighbor clausecomputing reordering scores during decoding with wordbased3 and phrasebased models is trivial since they only make use of localinformation to determine the orientation of a new in coming block bifor a lefttoright ordering model bi is scored based on its orientation with respect to bi1for instance if bi has a swap orientation withrespect to the previous phrase in the current translation hypothesis feature pbecomes ac tivecomputing lexicalized reordering scores with the hierarchical model is more complex since the model must identify contiguous blocksmonotone or swappingthat can be merged into hierarchical blocksthe employed method is an instance of thewellknown shiftreduce parsing algorithm and re lies on a stack of foreign substrings that have already been translatedeach time the decoder adds a new block to the current translation hypothesis it shifts the sourcelanguage indices of the block ontos then repeatedly tries reducing the top two ele ments of s if they are contiguous4 this parsingalgorithm was first applied in computational geome try to identify convex hulls and its running time was shown to be linear in the length of the sequence which applies the same algorithm to the binarization of scfg rulesfigure 3 provides an example of the execution of this algorithm for the translation output shownin figure 4 which was produced by a decoder in corporating our hierarchical reordering modelthe decoder successively pushes sourcelanguage spans 1 2 3 which are successively merged into 13 and all correspond to monotone orientations3we would like to point out an inconsistency in moses be tween training and testingdespite the fact that moses estimates a wordbased orientation model during training this model is then treated as a phrasebased orien tation model during testing 4it is not needed to store targetlanguage indices onto thestack since the decoder proceeds left to right and thus suc cessive blocks are always contiguous with respect to the target language851 target phrase source opoi stack the russian side 1 s m hopes 2 r m 1 to 3 r m 12 hold 11 s d 13 consultations 12 r m 11 13 with iran 910 r s 1112 13 on this 67 s d 912 13 issue 8 rr m 67 912 13 in the near future 45 rr s 612 13 13 ra m 112figure 3 the application of the shiftreduce parsing algorithm for identifying hierarchical blocksthis execu tion corresponds to the decoding example of figure 4operations include shift reduce and accept the source and stack columns contain source language spans which is the only information needed to determine whether two given blocks are contiguousoi isthe label predicted by the hierarchical model by compar ing the current block to the hierarchical phrase that 
is at the top of the stack0 12 34 56 the russi an side hope s to hold cons ultati ons with iran on this issue in the near future h 1 h 2 h 3 figure 4 output of our phrasebased decoder using the hierarchical model on a sentence of mt06hierarchical phrases h1 and h2 indicate that with iran and in the near future have a swap orientationh3 indicates that toand are monotonein this particular example distortion limit was set to 10it then encounters a discontinuity that prevents the next block 11 from being merged with 13as the decoder reaches the last words of the sentence 45 is successively merged with 612 then 13 yielding a stack that contains only 112a nice property of this parsing algorithm is that it does not worsen the asymptotic running time of beamsearch decoders such as moses such decoders run in time o where n is the length of the input sentenceindeed each time a partial translation hypothesis is expanded intoa longer one the decoder must perform an o op eration in order to copy the coverage set into the new hypothesissince this copy operationmust be executed o times the overall time complexity is quadraticthe incorporation of the shift reduce parser into such a decoder does not worsenoverall time complexity whenever the decoder expands a given partial translation into a longer hy pothesis it simply copies its stack into the newlycreated hypothesis operationhence the incorporation of the hierarchical models described in the paper into a phrasebased decoder preserves the o running timein practice we observe based on a set of experiments for chineseenglish and arabicenglish translation that our phrasebased decoder is on average only 135 times slower when it is running using hierarchical reordering features and the shiftreduce parserwe finally note that the decoding algorithm presented in this section can only be applied leftto right if the decoder itself is operating lefttorightin order to predict orientations relative to the righttoleft hierarchical reordering model we must resort to approximations at decoding timewe experi mented with different approximations and the one that worked best is described as followsfirst we note that an analysis of the alignment grid often reveals that certain orientations are impossiblefor instance the block issue in figure 4 can only have discontinuousorientation with respect to what comes next in en glish since words surrounding the chinese phrasehave already been translatedwhen several hier archical orientations are possible according to thealignment grid we choose according to the follow ing order of preference monotone swap discontinuousfor instance in the case of with iranin figure 4 only swap and discontinuous orientations are possible hence we give preference to swapthis prediction turns out to be the correct one according to the decoding 852 steps that complete the alignment gridwe now analyze the system output of figure 4 to fur ther motivate the hierarchical model this time from the perspective of the decoderwe first observe that the prepositional phrase in the future should rotatearound a relatively large noun phrase headed by consultationsunfortunately localized reordering models such as have no means of identifying that such a displacement is a swap accord ing to these models the orientation of in the futurewith respect to what comes previously is discontinuous which is an uninformative fallback categoryby identifying h2 as a hierarchical block the hierarchical model can properly deter mine that the block in the near future should have a swap 
orientation5 similar observations can be made regarding blocks h1 and h3 which leads our model to predict either monotone orientation or swap orienta tion while local models would predict discontinuous in all casesanother benefit of the hierarchical model is thatits representation of phrases remains the same dur ing both training and decoding which is not the casefor wordbased and phrasebased reordering mod elsthe deficiency of these local models lies in thefact that blocks handled by phrasebased smt sys tems tend to be long at training time and short attest time which has adverse consequences on nonhierarchical reordering modelsfor instance in fig ure 4 the phrasebased reordering model categorizes the block in the near future as discontinuous though if the sentence pair had been a training examplethis block would count as a swap because of the ex tracted phrase on this issuein our experiments we use a reimplementationof the moses decoder except for lexical reordering models all other fea tures are standard features implemented almost5note that the hierarchical phrase hold issue is not a well formed syntactic phrase ie it neither matches the bracketing of the verb phrase hold future nor matches the noun phrase consultations issue yet it enables sensible reorderingexactly as in moses four translation features word penalty phrase penalty linear distortion and language model scorewe experiment with two language pairs chinese toenglish and arabictoenglish for ce we trained translation models using a subset of the chineseenglish parallel data released by ldc this subset comprises 122m english words and 11m chinese wordschinese words are segmented with a conditional random field classifier that conforms to the chinese treebank standardthe training set for our ae systems also includes mostly news parallel data released by ldc and contains 195m english words and 187m arabic tokens that have been segmented using the arabic treebank standard6 for our language model we trained a 5gram model using the xinhua and afp sections of the gigaword corpus in addition to the target side of the parallel datafor both ce and ae we manually removed documents of gigaword that were released during periods that overlap with those of our development and test setsthe language model was smoothed with the modified kneserney algorithm and we kept only trigrams 4grams and 5grams that respectively occurred two three and three times in the training dataparameters were tuned with minimum errorrate training on the nist evaluation set of 2006 for both ce and aesince mertis prone to search errors especially with large num bers of parameters we ran each tuning experimentfour times with different initial conditionsthis pre caution turned out to be particularly important in the case of the combined lexicalized reordering models since mert must optimize up to 26 parameters at once in these cases7 for testing 6catalog numbers for ce ldc2002e18 ldc2003e07 ldc2003e14 ldc2005e83 ldc2005t06 ldc2006e26 and ldc2006e8for ae ldc2007e103 ldc2005e83 ldc2006e24 ldc2006e34 ldc2006e85 ldc2006e92 ldc2007e06 ldc2007e101 ldc2007e46 ldc2007e86 and ldc2008e407we combine lexicalized reordering models by simply treat ing them as distinct features which incidentally increases the number of model parameters that must be tuned with mert853 305 31 315 32 325 33 335 34 0 2 4 6 8 10 12 14 bl eu ch ines ee ngli sh distortion limit hierarchicalphrasebased wordbasedbaseline 43 435 44 445 45 455 0 2 4 6 8 10 bl eu arabic eng lish distortion limit 
hierarchicalphrasebased wordbasedbaseline figure 5 performance on the chineseenglish andarabicenglish development sets with increasing distortion limits for all lexicalized reordering mod els discussed in the paperour novel hierarchical model systematically outperforms all other models for distortion limit equal to or greater than 4the baseline is moses with no lexicalized reordering modelwe used the nist evaluation sets of 2005 and 2008 for chineseenglish and the test set of 2005 for arabicenglishstatistical significance is computed using the approximate randomization test whose application to mt evaluation was shown to be less sensitive totypei errors than the perhaps more widely used bootstrap resampling method tuning set performance is shown in figure 5since this paper studies various ordering modelsit is interesting to first investigate how the distor lexicalized reordering mt06 mt05 mt08 none 3185 2975 2522 wordbased 3296 3145 2586 phrasebased 3324 3123 2601 hierarchical 3380 3220 2638 phrasebased hierarchical 3386 3285 2653 table 3 bleu scores for chineseenglishand the orientation categories msdmaximum dis tortion is set to 6 words which is the default in mosesthe stars at the bottom of the tables indicate when a given hierarchical model is significantly better than all localmodels for a given development or test set lexicalized reordering mt06 mt05 mt08 phrasebased 3379 3232 2632 hierarchical 3401 3235 2658 phrasebased hierarchical 3436 3233 2703 table 4 bleu scores for chineseenglish and the orientation categories msdl drsince the distinction between these four categories is not available in moses hence we have no baseline results for this casemaximum distortion is set to 6 wordstion limit affects performance8 as has been shownin previous work in chineseenglish and arabic english translation limiting phrase displacements to six sourcelanguage words is a reasonable choicefor both ce and ae the hierarchical model is sig nificantly better than either other modelsfor distortion limits equal to or greater than 6 since a distortion limit of 6 works reasonably well for both language pairs and is the default in moses we used this distortion limit value for all testset experiments presented in this paperour main results for chineseenglish are shownin table 3it appears that hierarchical models provide significant gains over all nonhierarchical modelsimprovements on mt06 and mt05 are very sig nificant in the case of mt08 significant improvement is reached through the combination ofboth phrasebased and hierarchical modelswe of ten observe substantial gains when we combine such models presumably because we get the benefit of identifying both local and longdistance swapssince most orientations in the phrasebased model are discontinuous it is reasonable to ask whether8note that we ran mert separately for each distinct distor tion limit854 lexicalized reordering mt06 mt05 none 4403 5487 wordbased 4464 5496 phrasebased 4501 5509 hierarchical 4551 5550 phrasebased hierarchical 4564 5601 table 5 bleu scores for arabicenglish and the reordering categories msdlexicalized reordering mt06 mt05 phrasebased 4474 5552 hierarchical 4553 5602 phrasebased hierarchical 4563 5607 table 6 bleu scores for arabicenglish and the reordering categories msdl drthe relatively poor performance of the phrasebasedmodel is the consequence of an inadequate set of ori entation labelsto try to answer this question weuse the set of orientation labels msdldr de scribed in section 3results for this different set oforientations 
are shown in table 4while the phrasebased model appears to benefit more from the distinction between left and rightdiscontinuous sys tems that incorporate hierarchical models remain the most competitive overall their best performance on mt06 mt05 and mt08 are respectively 3436 3285 and 2703the best nonhierarchical models achieve only 3379 3232 and 2632 respectivelyall these differences are sta tistically significant at the 05 levelour results for arabicenglish are shown in ta bles 5 and 6similarly to ce we provide results for two orientation sets msd and msdldrwe note that the fourclass orientation set is overall less effective for ae than for cethis is probably due to the fact that there is less probability mass in ae assigned to the d category and thus it is less helpful to split the discontinuous category into twofor both orientation sets we observe in ae that the hierarchical model significantly outperforms thelocal ordering modelsgains provided by the hierarchical model are no less significant than for chinese toenglishthis positive finding is perhaps a bitsurprising since arabictoenglish translation gen erally does not require many word order changes compared to chinesetoenglish translation and thistranslation task so far has seldom benefited from hierarchical approaches to mt in our case one possi ble explanation is that arabicenglish translation is benefiting from the fact that orientation predictionsof the hierarchical model are consistent across train ing and testing which is not the case for the otherordering models discussed in this paper overall hierarchical models are the most effective on the two sets their best performances on mt06 and mt05 are respectively 4564 and 5607the best nonhierarchical models obtain only 4501 and 5552 respectively for the same setsall thesedifferences are statistically signifi cannot at the 05 levelin this paper we presented a lexicalized orientation model that enables phrase movements that are more complex than swaps between adjacent phrasesthis model relies on a hierarchical structure that is builtas a byproduct of lefttoright phrasebased decod ing without increase of asymptotic running timeweshow that this model provides statistically signifi cannot improvements for five nist evaluation sets and for two language pairsin future work we plan to extend the parameterization of our models to not only predict phrase orientation but also the length of each displacement as in we believe such an extension would improve translation quality in the case of larger distortionlimitswe also plan to experiment with discriminative approaches to estimating reordering probabil ities whichcould also be applied to our workwe think the abil ity to condition reorderings on any arbitrary featurefunctions is also very effective in the case of our hi erarchical model since information encoded in thetrees would seem beneficial to the orientation pre diction taskthe authors wish to thank the anonymous reviewers for their comments on an earlier draft of this paperthis paper is based on work funded by the defense advanced research projects agency through ibmthe content does not necessarily reflect the views of the yous government and no official endorsement should be inferred855
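to make the span-stack bookkeeping of section 4 above concrete, here is a minimal python sketch of the shift-reduce merging of source spans and the orientation lookup it supports. it is a sketch under stated assumptions rather than the paper's implementation: spans are 0-based inclusive (start, end) source indices, blocks arrive in the order they are translated, and the handling of the sentence-initial block is an illustrative guess.

```python
# minimal sketch of the span-stack bookkeeping behind the hierarchical
# orientation model: spans are 0-based inclusive (start, end) indices over
# the source sentence, and blocks arrive in the order they are translated.

def orientation(stack, span):
    """label an incoming block against the hierarchical block on top of the stack."""
    if not stack:
        # assumption: a sentence-initial block is monotone if it starts the source
        return "M" if span[0] == 0 else "D"
    top = stack[-1]
    if span[0] == top[1] + 1:
        return "M"   # block continues immediately to the right of the top span
    if span[1] == top[0] - 1:
        return "S"   # block sits immediately to the left of the top span
    return "D"       # discontinuous otherwise

def shift_reduce(stack, span):
    """shift the new span, then merge contiguous spans at the top of the stack."""
    stack.append(span)
    while len(stack) >= 2:
        a, b = stack[-2], stack[-1]
        if a[1] + 1 == b[0]:          # b extends a to the right
            stack[-2:] = [(a[0], b[1])]
        elif b[1] + 1 == a[0]:        # b extends a to the left
            stack[-2:] = [(b[0], a[1])]
        else:
            break
    return stack

if __name__ == "__main__":
    stack = []
    # toy source spans in translation order, loosely in the spirit of figure 3
    for span in [(0, 0), (1, 1), (2, 2), (10, 10), (11, 11), (8, 9), (5, 6), (7, 7), (3, 4)]:
        label = orientation(stack, span)
        shift_reduce(stack, span)
        print(span, label, stack)
```

as in the paper's figure 3, the toy run ends with a single span covering the whole source sentence; each shift-reduce step is constant amortized work, which is why the decoder's quadratic running time is preserved.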
D08-1089
a simple and effective hierarchical phrase reordering model. while phrasebased statistical machine translation systems currently deliver stateoftheart performance they remain weak on word order changes. current phrase reordering models can properly handle swaps between adjacent phrases but they typically lack the ability to perform the kind of longdistance reorderings possible with syntaxbased systems. in this paper we present a novel hierarchical phrase reordering model aimed at improving nonlocal reorderings which seamlessly integrates with a standard phrasebased system with little loss of computational efficiency. we show that this model can successfully handle the key examples often used to motivate syntaxbased systems such as the rotation of a prepositional phrase around a noun phrase. we contrast our model with reordering models commonly used in phrasebased systems and show that our approach provides statistically significant bleu point gains for two language pairs chineseenglish and arabicenglish. our hierarchical orientation model captures nonlocal phrase reordering by a shiftreduce algorithm. we introduce a deterministic shiftreduce parser into decoding so that the decoder always has access to the largest possible previous block given the current translation history. we introduce three orientation models for lexicalized reordering: wordbased, phrasebased, and hierarchical.
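as an illustration of the training-time orientation labeling behind the three models summarized above, the following python sketch reads labels off a word-alignment grid. the corner-cell coordinates (s-1, u-1) and (s-1, v+1) are reconstructed from context since they are garbled in this copy, and all names and conventions here are assumptions, not the reference implementation.

```python
# illustrative sketch: training-time orientation labels from a word-alignment
# grid. "links" is a set of (target_i, source_j) word alignments; the current
# phrase pair covers target span s..t and source span u..v (0-based, inclusive).

def word_based_orientation(links, s, t, u, v):
    prev_left = (s - 1, u - 1) in links    # link diagonally above-left of the phrase
    prev_right = (s - 1, v + 1) in links   # link diagonally above-right of the phrase
    if prev_left and not prev_right:
        return "M"
    if prev_right and not prev_left:
        return "S"
    return "D"

def consistent(links, s, t, u, v):
    """phrase-extract consistency: no link may cross the bispan boundary,
    and at least one link must fall inside it."""
    inside = False
    for (i, j) in links:
        in_target = s <= i <= t
        in_source = u <= j <= v
        if in_target != in_source:
            return False
        inside = inside or in_target
    return inside

def hierarchical_monotone(links, s, u):
    """true if some consistent phrase pair, with no length limit, ends exactly
    at target position s-1 and source position u-1 (the monotone test of the
    hierarchical model); the swap test is analogous at the other corner."""
    return any(
        consistent(links, s2, s - 1, u2, u - 1)
        for s2 in range(s)
        for u2 in range(u)
    )
```

the brute-force search in hierarchical_monotone is only for clarity; in practice the decoder-side shift-reduce stack plays this role without re-scanning the grid.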
two languages are better than one we show that jointly parsing a bitext can substantially improve parse quality on both sides in a maximum entropy bitext parsing model we define a distribution over source trees target trees and nodetonode alignments between them features include monolingual parse scores and various measures of syntactic divergence using the translated portion of the chinese treebank our model is trained iteratively to maximize the marginal likelihood of training tree pairs with alignments treated as latent variables the resulting bitext parser outperforms stateoftheart monoparser baselines by 25 predicting side trees and 18 predicting chinese side trees moreover these improved trees yield a 24 bleu increase when used in a downstream mt evaluation methods for machine translation have increasingly leveraged not only the formal machinery of syntax but also linguistic tree structures of either the source side the target side or both these methods all rely on automatic parsing of one or both sides of input bitexts and are therefore impacted by parser qualityunfortunately parsing general bitexts well can be a challenge for newswiretrained treebank parsers for many reasons including outofdomain input and tokenization issueson the other hand the presence of translation pairs offers a new source of information bilingual constraintsfor example figure 1 shows a case where a stateoftheart english parser has chosen an incorrect structure which is incompatible with the output of a comparable chinese parsersmith and smith previously showed that such bilingual constraints can be leveraged to transfer parse quality from a resourcerich language to a resourceimpoverished onein this paper we show that bilingual constraints and reinforcement can be leveraged to substantially improve parses on both sides of a bitext even for two resourcerich languagesformally we present a loglinear model over triples of source trees target trees and nodetonode tree alignments between themwe consider a set of core features which capture the scores of monolingual parsers as well as measures of syntactic alignmentour model conditions on the input sentence pair and so features can and do reference input characteristics such as posterior distributions from a wordlevel aligner our training data is the translated section of the chinese treebank so at training time correct trees are observed on both the source and target sidegold tree alignments are not present and so are induced as latent variables using an iterative training procedureto make the process efficient and modular to existing monolingual parsers we introduce several approximations use of kbest lists in candidate generation an adaptive bound to avoid considering all k2 combinations and viterbi approximations to alignment posteriorswe evaluate our system primarily as a parser and secondarily as a component in a machine translation pipelinefor both english and chinese we begin with the stateoftheart parsers presented in petrov and klein as a baselinejoint parse selection improves the english trees by 25 f1 and the chinese trees by 18 f1while other chinese treebank parsers do not have access to english side translations this chinese figure does outperform all published monolingual chinese treebank results on an equivalent split of the dataas mt motivates this work another valuable evaluation is the effect of joint selection on downstream mt qualityin an experiment using a syntactic mt system we find that rules extracted from joint parses results in an increase 
of 24 bleu points over rules extracted from independent parses1 in sum jointly parsing bitexts improves parses substantially and does so in a way that that carries all the way through the mt pipelinein our model we consider pairs of sentences where we use the convention that unprimed variables are source domain and primed variables are target domainthese sentences have parse trees t taken from candidate sets t nonterminal nodes in trees will be denoted by n and we abuse notation by equating trees with their node setsalignments a are simply atmostonetoone matchings between a pair of trees t and t note that we will also mention word alignments in feature definitions a and the unqualified term alignment will always refer to node alignmentswords in a sentence are denoted by v our model is a general loglinear distribution over triples for sentence pairs features are thus defined over triples we discuss specific features belowto use our model we need features of a triple which encode both the monolingual quality of the trees as well as the quality of the alignment between themwe introduce a variety of features in the next sectionsto capture basic monolingual parse quality we begin with a single source and a single target feature whose values are the log likelihood of the source tree t and the target tree t respectively as given by our baseline monolingual parsersthese two features are called sourcell and targetll respectivelyit is certainly possible to augment these simple features with what would amount to monolingual reranking features but we do not explore that option herenote that with only these two features little can be learned all positive weights w because the jointly optimal parse pair to comprise the two top1 monolingual outputs all other features in our model reference the entire triple in this work such features are defined over aligned node pairs for efficiency but generalizations are certainly possiblebias the first feature is simply a bias feature which has value 1 on each aligned node pair this bias allows the model to learn a general preference for denser alignmentsalignment features of course some alignments are better than othersone indicator of a good nodetonode alignment between n and n is that a good word alignment model thinks that there are many wordtoword alignments in their bispansimilarly there should be few alignments that violate that bispanto compute such features we define a to be the posterior probability assigned to the word alignment between v and v by an independent word aligner2 before defining alignment features we need to define some additional variablesfor any node n e t the inside span i comprises the input tokens of s dominated by that nodesimilarly the complement the outside span will be denoted o and comprises the tokens not dominated by that nodesee figure 2bc for examples of the resulting regionshard alignment features we also define the hard versions of these features which take counts from the word aligners hard top1 alignment output s scaled alignment features finally undesirable larger bispans can be relatively sparse at the word alignment level yet still contain many good word alignments simply by virtue of being largewe therefore define a scaled count which measures density rather than totalsthe geometric mean of span lengths was a superior measure of bispan area than the true area because wordlevel alignments tend to be broadly onetoone in our word alignment modelhead word alignment features when considering a node pair especially one which 
dominates a large area the above measures treat all spanned words as equally importanthowever lexical heads are generally more representative than other spanned wordslet h select the headword of a node according to standard head percolation rules we also consider features that measure correspondences between the tree structures themselvesspan difference we expect that in general aligned nodes should dominate spans of roughly the same length and so we allow the model to learn to penalize node pairs whose inside span lengths differ greatlynumber of children we also expect that there will be correspondences between the rules of the cfgs that generate the trees in each languageto encode some of this information we compute indicators of the number of children c that the nodes have in t and tnumchildren c 1 for each feature above we create labelspecific versions by conjoining the label pair we use both the typed and untyped variants of all featuresrecall that our data condition supplies sentence pairs along with gold parse pairs we do not observe the alignments a which link these parsesin principle we want to find weights which maximize the marginal log likelihood of what we do observe given our sentence pairs3 child labels in addition we also encode whether w arg max ep certain label pairs occur as children of matched w a nodeslet c select the children of n with la arg max ea exp bel w a exp childlabel c c note that the corresponding self labels feature is not listed because it arises in the next section as a typed variant of the bias featurethere are several challengesfirst the space of symmetric atmostonetoone matchings is phard to sum over exactly second even without matchings to worry about standard methods for maximizing the above formulation would require summation over pairs of trees and we want to assume a fairly generic interface to independent monolingual parsers as we have chosen to operate in a reranking mode over monolingual kbest lists we have another issue our kbest outputs on the data which trains our model may not include the gold tree pairwe therefore make several approximations and modifications which we discuss in turnbecause summing over alignments a is intractable we cannot evaluate or its derivativeshowever if we restrict the space of possible alignments then we can make this optimization more feasibleone way to do this is to stipulate in advance that for each tree pair there is a canonical alignment a0of course we want a0 to reflect actual correspondences between t and t0 so we want a reasonable definition that ensures the alignments are of reasonable qualityfortunately it turns out that we can efficiently optimize a given a fixed tree pair and weight vector this optimization requires only that we search for an optimal alignmentbecause all our features can be factored to individual node pairs this can be done with the hungarian algorithm in cubic time4 note that we do not enforce any kind of domination consistency in the matching for example the optimal alignment might in principle have the source root aligning to a target nonroot and vice versawe then define a0 as the alignment that maximizes w0 o where w0 is a fixed initial weight vector with a weight of 1 for insideboth 1 for insrcouttrg and intrgoutsrc and 0 for all other featuresthen we simplify by fixing the alignments a0 this optimization has no latent variables and is therefore convex and straightforwardhowever while we did use this as a rapid training procedure during development fixing the alignments a priori 
is both unsatisfying and also less effective than a procedure which allows the alignments a to adapt during trainingagain for fixed alignments a optimizing w is easysimilarly with a fixed w finding the optimal a for any particular tree pair is also easyanother option is therefore to use an iterative procedure that alternates between choosing optimal alignments for a fixed w and then reoptimizing w for those fixed alignments according to by iterating we perform the following optimization note that is just with summation replaced by maximizationthough we do not know of any guarantees for this themlike algorithm in practice it converges after a few iterations given sufficient training datawe initialize the procedure by setting w0 as defined abovewhen training our model we approximate the sets of all trees with kbest lists t and t0 produced by monolingual parserssince these sets are not guaranteed to contain the gold trees g and g0 our next approximation is to define a set of pseudogold trees following previous work in monolingual parse reranking we define tˆ as the f1optimal subset of t we then modify to reflect the fact that we are seeking to maximize the likelihood of trees in this subset to reduce the time and space requirements for training we do not always use the full kbest liststo prune the set t we rank all the trees in t from 1 to k according to their log likelihood under the baseline parsing model and find the rank of the least likely pseudogold tree finally we restrict t based on rank to prune the list of tree pairs first we rank them according to the metric where e is a free parameter of the pruning procedurethe restricted set t0pruned is constructed in the same waywhen training we replace the sum over all tree pairs in in the denominator of with a sum over all tree pairs in the parameter e can be set to any value from 0 to k with lower values resulting in more efficient training and higher values resulting in better performancewe set e by empirically determining a good speedperformance tradeoff at test time we have a weight vector w and so selecting optimal trees for the sentence pair from a pair of k best lists is straightforwardwe just find note that with no additional cost we can also find the optimal alignment between t and t0 because the size of grows as o the time spent iterating through all these tree pairs can grow unreasonably long particularly when reranking a set of sentence pairs the size of a typical mt corpusto combat this we use a simple pruning technique to limit the number of tree pairs under considerationthen we simply remove all tree pairs whose ranking falls below some empirically determined cutoffas we show in 63 by using this technique we are able to speed up reranking by a factor of almost 20 without an appreciable loss of performanceall the data used to train the joint parsing model and to evaluate parsing performance were taken from articles 1325 of the chinese treebank which all have english translations with goldstandard parse treesthe articles were split into training development and test sets according to the standard breakdown for chinese parsing evaluationsnot all sentence pairs could be included for various reasons including onetomany chineseenglish sentence alignments sentences omitted from the english translations and lowfidelity translationsadditional sentence pairs were dropped from the training data because they had unambiguous parses in at least one of the two languagestable 1 shows how many sentences were included in each datasetwe had two 
training setups rapid and fullin the rapid training setup only 1000 sentence pairs from the training set were used and we used fixed alignments for each tree pair rather than iterating the full training setup used the iterative training procedure on all 2298 training sentence pairswe used the english and chinese parsers in petrov and klein 5 to generate all kbest lists and as our evaluation baselinebecause our bilingual data is from the chinese treebank and the data typically used to train a chinese parser contains the chinese side of our bilingual training data we had to train a new chinese grammar using only articles 4001151 this modified grammar was used to generate the kbest lists that we trained our model onhowever as we tested on the same set of articles used for monolingual chinese parser evaluation there was no need to use a modified grammar to generate kbest lists at test time and so we used a regularly trained chinese parser for this purposewe also note that since all parsing evaluations were performed on chinese treebank data the chinese test sentences were indomain whereas the english sentences were very far outofdomain for the penn treebanktrained baseline english parserhence in these evaluations chinese scores tend to be higher than english onesposterior word alignment probabilities were obtained from the word aligner of liang et al and denero and klein 6 trained on approximately 17 million sentence pairsfor our alignment model we used an hmm in each direction trained to agree and we combined the posteriors using denero and kleins soft union methodunless otherwise specified the maximum value of k was set to 100 for both training and testing and all experiments used a value of 25 as the c parameter for training set pruning and a cutoff rank of 500 for test set pruningto verify that all our features were contributing to the models performance we did an ablation study removing one group of features at a timetable 2 shows the f1 scores on the bilingual development data resulting from training with each group of features removed7 note that though head word features seemed to be detrimental in our rapid training setup earlier testing had shown a positive effect so we reran the comparison using our full training setup where we again saw an improvement when including these featuresto find a good value of the c parameter for training set pruning we tried several different values using our rapid training setup and testing on the dev setthe results are shown in table 3we selected 25 as it showed the best performancespeed tradeoff on average performing as well as if we had done no pruning at all while requiring only a quarter the memory and cpu timewe also tried several different values of the rank cutoff for test set pruning using the full training setup and testing on the dev setthe results are in table 4for f1 evaluation which is on a very small set of sentences we selected 500 as the value with the best speedperformance tradeoffhowever when reranking our entire mt corpus we used a value of 200 sacrificing a tiny bit of performance for an extra factor of 2 in speed8 since our bitext parser currently operates as a reranker the quality of the trees is limited by the quality of the kbest lists produced by the baseline parsersto test this limitation we evaluated performance on the dev set using baseline kbest lists of varying lengthtraining parameters were fixed and test set pruning was disabled for these experimentsthe results are in table 5the relatively modest gains with increasing k 
even as the oracle scores continue to improve indicate that performance is limited more by the models reliance on the baseline parsers than by search errors that result from the reranking approachour final evaluation was done using the full training setuphere we report f1 scores on two sets of datafirst as before we only include the sentence pairs from our bilingual corpus to fully demonstrate the gains made by joint parsingwe also report scores on the full test set to allow easier comparison with past work on chinese parsingfor the latter evaluation sentences that were not in the bilingual corpus were simply parsed with the baseline parsersthe results are in table 6joint parsing improves f1 by 25 points on outofdomain english sentences and by 18 points on indomain chinese sentences this represents the best published chinese treebank parsing performance even after sentences that lack a translation are taken into accountto test the impact of joint parsing on syntactic mt systems we compared the results of training an mt system with two different sets of trees those produced by the baseline parsers and those produced by our joint parserfor this evaluation we used a syntactic system based on galley et al and galley et al which extracts treetostring transducer rules based on targetside treeswe trained the system on 150000 chineseenglish sentence pairs from the training corpus of wang et al and used a large 4gram lanwith trees output from either baseline monolingual parsers or our joint parserto facilitate relative comparison the moses number listed reflects the default moses configuration including its full distortion model and standard training pipeline guage model for decodingwe tuned and evaluated bleu on separate heldout sets of sentences of up to length 40 from the same corpusthe results are in table 7 showing that joint parsing yields a bleu increase of 249by jointly parsing sentences in a translation pair it is possible to exploit mutual constraints that improve the quality of syntactic analyses over independent monolingual parsingwe presented a joint loglinear model over source trees target trees and nodetonode alignments between them which is used to select an optimal tree pair from a kbest liston chinese treebank data this procedure improves f1 by 18 on chinese sentences and by 25 on outofdomain english sentencesfurthermore by using this joint parsing technique to preprocess the input to a syntactic mt system we obtain a 24 bleu improvementwe would like to thank the anonymous reviewers for helpful comments on an earlier draft of this paper and adam pauls and jing zheng for help in running our mt experiments
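to illustrate the test-time reranking step described above (selecting a tree pair from the pruned product of the two k-best lists), here is a hedged python sketch. the paper's exact pruning metric is garbled in this copy, so a simple product of baseline ranks stands in for it; the cutoff of 500 mirrors the reported test-set setting; and "features" is a placeholder for the feature extractor, which in the paper also chooses the best node-to-node alignment for each pair.

```python
# minimal sketch of joint tree-pair selection from two k-best lists.
# src_kbest / trg_kbest are lists of candidate trees in baseline-rank order;
# features(t_src, t_trg) returns a feature vector; weights is the learned
# weight vector. all of this is illustrative, not the paper's code.

import itertools
import math

def select_tree_pair(src_kbest, trg_kbest, features, weights, cutoff=500):
    # stand-in rank heuristic: prefer pairs whose baseline ranks are both small
    pairs = sorted(
        itertools.product(enumerate(src_kbest), enumerate(trg_kbest)),
        key=lambda p: (p[0][0] + 1) * (p[1][0] + 1),
    )[:cutoff]

    best_pair, best_score = None, -math.inf
    for (_, t_src), (_, t_trg) in pairs:
        score = sum(w * f for w, f in zip(weights, features(t_src, t_trg)))
        if score > best_score:
            best_pair, best_score = (t_src, t_trg), score
    return best_pair
```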
D08-1092
two languages are better than one. we show that jointly parsing a bitext can substantially improve parse quality on both sides. in a maximum entropy bitext parsing model we define a distribution over source trees, target trees, and nodetonode alignments between them. features include monolingual parse scores and various measures of syntactic divergence. using the translated portion of the chinese treebank, our model is trained iteratively to maximize the marginal likelihood of training tree pairs with alignments treated as latent variables. the resulting bitext parser outperforms stateoftheart monolingual parser baselines by 2.5 f1 at predicting english side trees and 1.8 f1 at predicting chinese side trees. moreover, these improved trees yield a 2.4 bleu increase when used in a downstream mt evaluation. in bitext parsing we use feature functions defined on triples of source tree, target tree, and nodetonode alignment, combined in a loglinear model trained to maximize parse accuracy. we use word alignment density features which measure how well the aligned entity pair matches up with alignments from an independent word aligner.
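as a rough illustration of the word-alignment density idea mentioned in the summary above, the sketch below sums the posterior alignment mass inside an aligned node pair's bispan and normalizes by the geometric mean of the two span lengths, as the paper describes. the data shapes and names are illustrative assumptions.

```python
# illustrative sketch of a soft alignment-density feature for an aligned node
# pair. "posterior" maps (src_index, trg_index) to a probability from an
# independent word aligner; spans are half-open (start, end) token ranges.

import math

def inside_mass(posterior, src_span, trg_span):
    total = 0.0
    for i in range(*src_span):
        for j in range(*trg_span):
            total += posterior.get((i, j), 0.0)
    return total

def density(posterior, src_span, trg_span):
    src_len = max(src_span[1] - src_span[0], 1)
    trg_len = max(trg_span[1] - trg_span[0], 1)
    # geometric mean of span lengths as the area measure, per the paper's choice
    return inside_mass(posterior, src_span, trg_span) / math.sqrt(src_len * trg_len)
```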
unsupervised semantic parsing we present the first unsupervised approach to the problem of learning a semantic parser using markov logic our usp system transforms dependency trees into quasilogical forms recursively induces lambda forms from these and clusters them to abstract away syntactic variations of the same meaning the map semantic parse of a sentence is obtained by recursively assigning its parts to lambdaform clusters and composing them we evaluate our approach by using it to extract a knowledge base from biomedical abstracts and answer questions usp substantially outperforms textrunner dirt and an informed baseline on both precision and recall on this task semantic parsing maps text to formal meaning representationsthis contrasts with semantic role labeling and other forms of shallow semantic processing which do not aim to produce complete formal meaningstraditionally semantic parsers were constructed manually but this is too costly and brittlerecently a number of machine learning approaches have been proposed however they are supervised and providing the target logical form for each sentence is costly and difficult to do consistently and with high qualityunsupervised approaches have been applied to shallow semantic tasks information extraction but not to semantic parsingin this paper we develop the first unsupervised approach to semantic parsing using markov logic our usp system starts by clustering tokens of the same type and then recursively clusters expressions whose subexpressions belong to the same clustersexperiments on a biomedical corpus show that this approach is able to successfully translate syntactic variations into a logical representation of their common meaning this in turn allows it to correctly answer many more questions than systems based on textrunner and dirt we begin by reviewing the necessary background on semantic parsing and markov logicwe then describe our markov logic network for unsupervised semantic parsing and the learning and inference algorithms we usedfinally we present our experiments and resultsthe standard language for formal meaning representation is firstorder logica term is any expression representing an object in the domainan atomic formula or atom is a predicate symbol applied to a tuple of termsformulas are recursively constructed from atomic formulas using logical connectives and quantifiersa lexical entry defines the logical form for a lexical item the semantic parse of a sentence is derived by starting with logical forms in the lexical entries and recursively composing the meaning of larger fragments from their partsin traditional approaches the lexical entries and meaningcomposition rules are both manually constructedbelow are sample rules in a definite clause grammar for parsing the sentence utah borders idahothe first three lines are lexical entriesthey are fired upon seeing the individual wordsfor example the first rule applies to the word borders and generates syntactic category verb with the meaning ayaxborders that represents the nextto relationhere we use the standard lambdacalculus notation where ayaxborders represents a function that is true for any pair such that borders holdsthe last two rules compose the meanings of subparts into that of the larger partfor example after the first and third rules are fired the fourth rule fires and generates vpayaxborders this meaning simplifies to axborders by the areduction rule which substitutes the argument for a variable in a functional applicationa major challenge to semantic parsing 
is syntactic variations of the same meaning which abound in natural languagesfor example the aforementioned sentence can be rephrased as utah is next to idahoutah shares a border with idaho etcmanually encoding all these variations into the grammar is tedious and errorpronesupervised semantic parsing addresses this issue by learning to construct the grammar automatically from sample meaning annotations existing approaches differ in the meaning representation languages they use and the amount of annotation requiredin the approach of zettlemoyer and collins the training data consists of sentences paired with their meanings in lambda forma probabilistic combinatory categorial grammar is learned using a loglinear model where the probability of the final logical form l and meaningderivation tree t conditioned on the sentence 5 is p ization constant and fi are the feature functions with weights wicandidate lexical entries are generated by a domainspecific procedure based on the target logical formsthe major limitation of supervised approaches is that they require meaning annotations for example sentenceseven in a restricted domain doing this consistently and with high quality requires nontrivial effortfor unrestricted text the complexity and subjectivity of annotation render it essentially infeasible even prespecifying the target predicates and objects is very difficulttherefore to apply semantic parsing beyond limited domains it is crucial to develop unsupervised methods that do not rely on labeled meaningsin the past unsupervised approaches have been applied to some semantic tasks but not to semantic parsingfor example dirt learns paraphrases of binary relations based on distributional similarity of their arguments textrunner automatically extracts relational triples in open domains using a selftrained extractor sne applies relational clustering to generate a semantic network from textrunner triples while these systems illustrate the promise of unsupervised methods the semantic content they extract is nonetheless shallow and does not constitute the complete formal meaning that can be obtained by a semantic parseranother issue is that existing approaches to semantic parsing learn to parse syntax and semantics together1 the drawback is that the complexity in syntactic processing is coupled with semantic parsing and makes the latter even harderfor example when applying their approach to a different domain with somewhat less rigid syntax zettlemoyer and collins need to introduce new combinators and new forms of candidate lexical entriesideally we should leverage the enormous progress made in syntactic parsing and generate semantic parses directly from syntactic analysisin many nlp applications there exist rich relations among objects and recent work in statistical relational learning and structured prediction has shown that leveraging these can greatly improve accuracyone of the most powerful representations for this is markov logic which is a probabilistic extension of firstorder logic markov logic makes it possible to compactly specify probability distributions over complex relational domains and has been successfully applied to unsupervised coreference resolution and other tasksa markov logic network is a set of weighted firstorder clausestogether with a set of constants it defines a markov network with one node per ground atom and one feature per ground clausethe weight of a feature is the weight of the firstorder clause that originated itthe probability of a state x in such a network is given 
by the loglinear model p constant wi is the weight of the ith formula and ni is the number of satisfied groundingsunsupervised semantic parsing rests on three key ideasfirst the target predicate and object constants which are prespecified in supervised semantic parsing can be viewed as clusters of syntactic variations of the same meaning and can be learned from datafor example borders represents the nextto relation and can be viewed as the cluster of different forms for expressing this relation such as borders is next to share the border with utah represents the state of utah and can be viewed as the cluster of utah the beehive state etcsecond the identification and clustering of candidate forms are integrated with the learning for meaning composition where forms that are used in composition with the same forms are encouraged to cluster together and so are forms that are composed of the same subformsthis amounts to a novel form of relational clustering where clustering is done not just on fixed elements in relational tuples but on arbitrary forms that are built up recursivelythird while most existing approaches learn to parse both syntax and semantics unsupervised semantic parsing starts directly from syntactic analyses and focuses solely on translating them to semantic contentthis enables us to leverage advanced syntactic parsers and the available rich resources for themmore importantly it separates the complexity in syntactic analysis from the semantic one and makes the latter much easier to performin particular meaning composition does not require domainspecific procedures for generating candidate lexicons as is often needed by supervised methodsthe input to our usp system consists of dependency trees of training sentencescompared to phrasestructure syntax dependency trees are the more appropriate starting point for semantic processing as they already exhibit much of the relationargument structure at the lexical levelusp first uses a deterministic procedure to convert dependency trees into quasilogical forms the qlfs and their subformulas have natural lambda forms as will be described laterstarting with clusters of lambda forms at the atom level usp recursively builds up clusters of larger lambda formsthe final output is a probability distribution over lambdaform clusters and their compositions as well as the map semantic parses of training sentencesin the remainder of the section we describe the details of uspwe first present the procedure for generating qlfs from dependency treeswe then introduce their lambda forms and clusters and show how semantic parsing works in this settingfinally we present the markov logic network used by uspin the next sections we present efficient algorithms for learning and inference with this mlna dependency tree is a tree where nodes are words and edges are dependency labelsto derive the qlf we convert each node to an unary atom with the predicate being the lemma plus pos tag and each edge to a binary atom with the predicate being the dependency labelfor example the node for utah becomes utah and the subject dependency becomes nsubjhere the ni are skolem constants indexed by the nodesthe qlf for a sentence is the conjunction of the atoms for the nodes and edges eg the sentence above will become borders utah idaho nsubj dobjgiven a qlf a relation or an object is represented by the conjunction of a subset of the atomsfor example the nextto relation is represented by borders nsubj dobj and the states of utah and idaho are represented by utah and idahothe 
meaning composition of two subformulas is simply their conjunctionthis allows the maximum flexibility in learningin particular lexical entries are no longer limited to be adjacent words as in zettlemoyer and collins but can be arbitrary fragments in a dependency treefor every subformula f we define a corresponding lambda form that can be derived by replacing every skolem constant ni that does not appear in any unary atom in f with a unique lambda variable xiintuitively such constants represent objects introduced somewhere else and correspond to the arguments of the relation represented by f for example the lambda form for borders nsubj dobj is ax2ax3 borders nsubj dobjconceptually a lambdaform cluster is a set of semantically interchangeable lambda formsfor example to express the meaning that utah borders idaho we can use any form in the cluster representing the nextto relation any form in the cluster representing the state of utah and any form in the cluster representing the state of idaho conditioned on the clusters the choices of individual lambda forms are independent of each otherto handle variable number of arguments we follow davidsonian semantics and further decompose a lambda form into the core form which does not contain any lambda variable and the argument forms which contain a single lambda variable and ax3dobjeach lambdaform cluster may contain some number of argument types which cluster distinct forms of the same argument in a relationfor example in stanford dependencies the object of a verb uses the dependency dobj in the active voice but nsubjpass in passivelambdaform clusters abstract away syntactic variations of the same meaninggiven an instance of cluster t with arguments of argument types a1 ak its abstract lambda form is given by ax1 axkt mi1 aigiven a sentence and its qlf semantic parsing amounts to partitioning the atoms in the qlf dividing each part into core form and argument forms and then assigning each form to a cluster or an argument typethe final logical form is derived by composing the abstract lambda forms of the parts using the areduction rule2 formally for a qlf q a semantic parse l partitions q into parts p1 p2 pn each part p is assigned to some lambdaform cluster c and is further partitioned into core form f and argument forms f1 fk each argument form is assigned to an argument type a in c the usp mln defines a joint probability distribution over q and l by modeling the distributions over forms and arguments given the cluster or argument typebefore presenting the predicates and formulas in our mln we should emphasize that they should not be confused with the atoms and formulas in the qlfs which are represented by reified constants and variablesto model distributions over lambda forms we introduce the predicates form and argform where p is a part i is the index of an argument and f is a qlf subformulaform is true iff part p has core form f and argform is true iff the ith argument in p has form f3 the f notation signifies that each part or argument can have only one formto model distributions over arguments we introduce three more predicates argtype signifies that the ith argument of p is assigned to argument type a arg signifies that the ith argument of p is p number signifies that there are n arguments of p that are assigned to type athe truth value of number is determined by the argtype atomsunsupervised semantic parsing can be captured by four formulas all free variables are implicitly universally quantifiedthe notation signifies that the mln contains 
an instance of the formula, with a separate weight for each value combination of the variables with a plus sign. (Currently we do not handle quantifier scoping or semantics for specific closed-class words such as determiners; these will be pursued in future work.) The first formula models the mixture of core forms given the cluster, and the others model the mixtures of argument forms, argument types, and argument numbers, respectively, given the argument type. To encourage clustering and avoid overfitting, we impose an exponential prior with weight α on the number of parameters.

The MLN above has one problem: it often clusters expressions that are semantically opposite. For example, it clusters antonyms like elderly/young and mature/immature. This issue also occurs in other semantic-processing systems; in general, this is a difficult open problem that only recently has started to receive some attention. Resolving it is not the focus of this paper, but we describe a general heuristic for fixing the problem. We observe that the problem stems from the lack of negative features for discovering meanings in contrast. In natural languages, parallel structures like conjunctions are one such feature. We thus introduce an exponential prior with weight β on the number of conjunctions in which the two conjoined parts are assigned to the same cluster. To detect conjunctions, we simply use the Stanford dependencies that begin with conj. This proves very effective, fixing the majority of the errors in our experiments.

Given a sentence and the quasi-logical form Q derived from its dependency tree, the conditional probability of a semantic parse L is given by Pr(Q, L) ∝ exp(Σ_i w_i n_i(Q, L)), where the w_i are the formula weights and the n_i(Q, L) are the numbers of satisfied groundings of the corresponding formulas. The MAP semantic parse is simply arg max_L Σ_i w_i n_i(Q, L). Enumerating all L's is intractable. It is also unnecessary, since most partitions will result in parts whose lambda forms have no cluster they can be assigned to. Instead, USP uses a greedy algorithm to search for the MAP parse. First we introduce some definitions: a partition is called α-reducible from p if it can be obtained from the current partition by recursively α-reducing the part containing p with one of its arguments; such a partition is called feasible if the core form of the new part is contained in some cluster.

Algorithm 1 (USP-Parse): form parts for individual atoms in the QLF and assign each to its most probable cluster; repeat: for all parts p in the current partition, and for all partitions that are α-reducible from p and feasible, find the most probable cluster and argument-type assignments for the new part, then change to the new partition and assignments with the highest gain in probability; until none of these improve the probability; return the current partition and assignments.

For example, consider the QLF of "Utah borders Idaho" and assume that the current partition is {λx2λx3. borders(n1) ∧ nsubj(n1, x2) ∧ dobj(n1, x3), Utah(n2), Idaho(n3)}. Then the following partition is α-reducible from the first part above: {λx3. borders(n1) ∧ nsubj(n1, n2) ∧ Utah(n2) ∧ dobj(n1, x3), Idaho(n3)}. Whether this new partition is feasible depends on whether the core form of the new part, borders(n1) ∧ nsubj(n1, n2) ∧ Utah(n2), is contained in some lambda-form cluster. Algorithm 1 gives pseudocode for our algorithm (a schematic code sketch of this greedy loop is also given below). Given part p, finding partitions that are α-reducible from p and feasible can be done in time O(ST), where S is the size of the clustering in the number of core forms and T is the maximum number of atoms in a core form. We omit the proof here, but point out that it is related to the unordered subtree matching problem, which can be solved in linear time. Inverted indexes are used to further improve the efficiency. For a new part p and a cluster that
contains p's core form, there are k^m ways of assigning p's m arguments to the k argument types of the cluster. For larger k and m this is very expensive; we therefore approximate it by assigning each argument to the best type, independent of the other arguments. This algorithm is very efficient and is used repeatedly in learning.

The learning problem in USP is to maximize the log-likelihood of observing the QLFs obtained from the dependency trees, denoted by Q, summing out the unobserved semantic parses: L_θ(Q) = log P_θ(Q) = log Σ_L P_θ(Q, L). Here, L are the semantic parses, θ are the MLN parameters, and P_θ(Q, L) are the completion likelihoods. A serious challenge in unsupervised learning is the identifiability problem. This problem is particularly severe for log-linear models with hard constraints, which are common in MLNs. For example, in our USP MLN, conditioned on the fact that p ∈ c, there is exactly one value of f that can satisfy the formula p ∈ c ∧ Form(p, f), and if we add some constant number to the weights of p ∈ c ∧ Form(p, f) for all f, the probability distribution stays the same. (Regularizations, e.g., Gaussian priors on weights, alleviate this problem by penalizing large weights, but it remains true that weights within a short range are roughly equivalent.) The learner can be easily confused by the infinitely many optima, especially in the early stages. To address this problem, we impose local normalization constraints on specific groups of formulas that are mutually exclusive and exhaustive; i.e., in each group we require that Σ_{i=1..k} e^{w_i} = 1, where the w_i are the weights of the formulas in the group. Grouping is done in such a way as to encourage the intended mixture behaviors. Specifically, for the rule p ∈ c ∧ Form(p, f), all instances given a fixed c form a group; for each of the remaining three rules, all instances given a fixed a form a group. Notice that with these constraints the completion likelihood P_θ(Q, L) can be computed in closed form for any L; in particular, each formula group contributes a term equal to the weight of the currently satisfied formula. In addition, the optimal weights that maximize the completion likelihood P_θ(Q, L) can be derived in closed form using empirical relative frequencies; e.g., the optimal weight of p ∈ c ∧ Form(p, f) is log(n_{c,f} / n_c), where n_{c,f} is the number of parts p that satisfy both p ∈ c and Form(p, f), and n_c is the number of parts p that satisfy p ∈ c. We leverage this fact for efficient learning in USP.

Another major challenge in USP learning is the summation in the likelihood, which is over all possible semantic parses for a given dependency tree. Even an efficient sampler like MC-SAT, as used in Poon and Domingos, would have a hard time generating accurate estimates within a reasonable amount of time. On the other hand, as already noted in the previous section, the lambda-form distribution is generally sparse. Large lambda-forms are rare, as they correspond to complex expressions that are often decomposable into smaller ones. Moreover, while ambiguities are present at the lexical level, they quickly diminish when more words are present. Therefore, a lambda form can usually only belong to a small number of clusters, if not a unique one. We thus simplify the problem by approximating the sum with the mode, and search instead for the L and θ that maximize log P_θ(Q, L). Since the optimal weights and log-likelihood can be derived in closed form given the semantic parses L, we simply search over semantic parses, evaluating them using log-likelihood.

Algorithm 2 (USP-Learn), in part: update the agenda and candidate operations; until the agenda is empty; return the MLN with learned weights and the semantic parses. Algorithm 2 gives pseudocode for our algorithm. The input consists of an MLN without
weights and the qlfs for the training sentencestwo operators are used for updating semantic parsesthe first is to merge two clusters denoted by merge for clusters c1 c2 which does the following and there is the local normalization constraint ef ewcf 1the optimal weights wcf are easily derived by solving this constrained optimization problemhere merging two argument types refers to pooling their argument forms to create a new argument typeenumerating all possible ways of creating new argument types is intractableusp approximates it by considering one type at a time and either creating a new type for it or merging it to types already considered whichever maximizes the loglikelihoodthe types are considered in decreasing order of their numbers of occurrences so that more information is available for each decisionmerge clusters syntactically different expressions whose meanings appear to be the same according to the modelthe second operator is to create a new cluster by composing two existing ones denoted by compose which does the following compose creates clusters of large lambdaforms if they tend to be composed of the same subforms these lambdaforms may later be merged with other clusters at learning time usp maintains an agenda that contains operations that have been evaluated and are pending executionduring initialization usp forms a part and creates a new cluster for each unary atom youit also assigns binary atoms of the form b to the part as argument forms and creates a new argument type for eachthis forms the initial clustering and semantic parsesusp then merges clusters with the same core form using merge8 at each step usp evaluates the candidate operations and adds them to the agenda if the improvement is 8wordsense disambiguation can be handled by including a new kind of operator that splits a cluster into subclusterswe leave this to future work above a threshold9 the operation with the highest score is executed and the parameters are updated with the new optimal valuesthe qlfs which contain an affected part are reparsed and operations in the agenda whose score might be affected are reevaluatedthese changes are done very efficiently using inverted indexeswe omit the details here due to space limitationsusp terminates when the agenda is empty and outputs the current mln parameters and semantic parsesusp learning uses the same optimization objective as hard them and is also guaranteed to find a local optimum since at each step it improves the loglikelihoodit differs from them in directly optimizing the likelihood instead of a lower boundevaluating unsupervised semantic parsers is difficult because there is no predefined formal language or gold logical forms for the input sentencesthus the best way to test them is by using them for the ultimate goal answering questions based on the input corpusin this paper we applied usp to extracting knowledge from biomedical abstracts and evaluated its performance in answering a set of questions that simulate the information needs of biomedical researcherswe used the genia dataset as the source for knowledge extractionit contains 1999 pubmed abstracts and marks all mentions of biomedical entities according to the genia ontology such as cell protein and dnaas a first approximation to the questions a biomedical researcher might ask we generated a set of two thousand questions on relations between entitiessample questions are what regulates mip1alpha what does antistat 1 inhibitto simulate the real information need we sample the relations from the 100 
most frequently used verbs and sample the entities from those annotated in genia both according to their numbers of occurrenceswe evaluated usp by the number of answers it provided and the accuracy as determined by manual labeling10 since usp is the first unsupervised semantic parser conducting a meaningful comparison of it with other systems is not straightforwardstandard questionanswering benchmarks do not provide the most appropriate comparison because they tend to simultaneously emphasize other aspects not directly related to semantic parsingmoreover most stateoftheart qa systems use supervised learning in their key components andor require domainspecific engineering effortsthe closest available system to usp in aims and capabilities is textrunner and we compare with ittextrunner is the stateoftheart system for opendomain information extraction its goal is to extract knowledge from text without using supervised labelsgiven that a central challenge to semantic parsing is resolving syntactic variations of the same meaning we also compare with resolver a stateoftheart unsupervised system based on textrunner for jointly resolving entities and relations and dirt which resolves paraphrases of binary relationsfinally we also compared to an informed baseline based on keyword matchingkeyword we consider a baseline system based on keyword matchingthe question substring containing the verb and the available argument is directly matched with the input text ignoring case and morphologywe consider two ways to derive the answer given a matchthe first one simply returns the rest of sentence on the other side of the verbthe second one is informed by syntax the answer is extracted from the subject or object of the verb depending on the questionif the verb does not contain the expected argument the sentence is ignoredtextrunner textrunner inputs text and outputs relational triples in the form where r is the relation string and a1 a2 the argument stringsgiven a triple and a question we first match their relation strings and then match the strings for the argument that is present in the questionif both match we return the other argument string in the triple as an answerwe report results when exact match is used or when the triple string can contain the question one as a substring resolver resolver inputs textrunner triples and collectively resolves coreferent relation and argument stringson the genia data using the default parameters resolver produces only a few trivial relation clusters and no argument clustersthis is not surprising since resolver assumes high redundancy in the data and will discard any strings with fewer than 25 extractionsfor a fair comparison we also ran resolver using all extractions and manually tuned the parameters based on eyeballing of clustering qualitythe best result was obtained with 25 rounds of execution and with the entity multiple set to 200 to answer questions the only difference from textrunner is that a question string can match any string in its clusteras in textrunner we report results for both exact match and substring dirt the dirt system inputs a path and returns a set of similar pathsto use dirt in question answering we queried it to obtain similar paths for the relation of the question and used these paths while matching sentenceswe first used minipar to parse input text using the same dependencies as dirtto determine a match we first check if the sentence contains the question path or one of its dirt pathsif so and if the available argument slot in the question 
is contained in the one in the sentence it is a match and we return the other argument slot from the sentence if it is presentideally a fair comparison will require running dirt on the genia text but we were not able to obtain the source codewe thus resorted to using the latest dirt database released by the author which contains paths extracted from a large corpus with more than 1gb of textthis puts dirt in a very advantageous position compared with other systemsin our experiments we used the top three similar paths as including more results in very low precisionusp we built a system for knowledge extraction and question answering on top of uspit generated stanford dependencies from the input text using the stanford parser and then fed these to usplearn11 which produced an mln with learned weights and the map semantic parses of the input sentencesthese map parses formed our knowledge base to answer questions the system first parses the questions12 using uspparse with the learned mln and then matches the question parse to parses in the kb by testing subsumption when a match occurs our system then looks for arguments of type in accordance with the questionfor example if the question is what regulates mip1alpha it searches for the argument type of the relation that contains the argument form nsubj for subjectif such an argument exists for the relation part it will be returned as the answertable 1 shows the results for all systemsusp extracted the highest number of answers almost doubling that of the second highest it obtained the highest accuracy at 88 and the number of correct answers it extracted is three times that of the second highest systemthe informed baseline did surprisingly well compared to systems other than usp in terms of accuracy and number of correct answerstextrunner achieved good accuracy when exact match is used but only obtained a fraction of the answers compared to uspwith substring match its recall substantially improved but precision dropped more than 20 pointsresolver improved the number of extracted answers by sanctioning more matches based on the clusters it generatedhowever most of those additional answers are incorrect due to wrong clusteringdirt obtained the second highest number of correct answers but its precision is quite low because the similar paths contain many errorsmanual inspection shows that usp is able to resolve many nontrivial syntactic variations without user supervisionit consistently resolves the syntactic difference between active and passive voicesit successfully identifies many distinct argument forms that mean the same it also resolves many nouns correctly and forms meaningful groups of relationshere are some sample clusters in core forms investigate examine evaluate analyze study assay diminish reduce decrease attenuate synthesis production secretion release dramatically substantially significantly an example questionanswer pair together with the source sentence is shown below q what does il13 enhancea the 12lipoxygenase activity of murine macrophagessentence the data presented here indicate that the 12lipoxygenase activity of murine macrophages is upregulated in vitro and in vivo by il4 andor il13 this paper introduces the first unsupervised approach to learning semantic parsersour usp system is based on markov logic and recursively clusters expressions to abstract away syntactic variations of the same meaningwe have successfully applied usp to extracting a knowledge base from biomedical text and answering questions based on itdirections for 
future work include better handling of antonyms subsumption relations among expressions quantifier scoping more complex lambda forms etc use of context and discourse to aid expression clustering and semantic parsing more efficient learning and inference application to larger corpora etcwe thank the anonymous reviewers for their commentsthis research was partly funded by aro grant w911nf0810242 darpa contracts fa87500520283 fa875007d0185 hr001106c0025 hr001107c0060 and nbchd030010 nsf grants iis0534881 and iis0803481 and onr grant n000140810670the views and conclusions contained in this document are those of the authors and should not be interpreted as necessarily representing the official policies either expressed or implied of aro darpa nsf onr or the united states government
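To make the greedy MAP-parse procedure of Algorithm 1 concrete, here is a minimal Python sketch of its search loop. It is not the authors' implementation: Partition, candidate_reductions, best_assignment, and log_likelihood are hypothetical helpers standing in for the data structures and closed-form scoring described above.

```python
# A schematic sketch of the greedy search in Algorithm 1 (USP-Parse).
# Partition, candidate_reductions, best_assignment and log_likelihood are
# hypothetical helpers, not the authors' implementation.

def usp_parse(qlf, clustering):
    """Greedily search for the MAP semantic parse of one quasi-logical form."""
    # Start with one part per atom, each assigned to its most probable cluster.
    partition = Partition.initial(qlf, clustering)
    improved = True
    while improved:
        improved = False
        best_gain, best_partition = 0.0, None
        for part in partition.parts():
            # Enumerate partitions that are alpha-reducible from this part
            # and feasible (the new part's core form exists in some cluster).
            for move in candidate_reductions(part, partition, clustering):
                candidate = best_assignment(partition.apply(move), clustering)
                gain = log_likelihood(candidate) - log_likelihood(partition)
                if gain > best_gain:
                    best_gain, best_partition = gain, candidate
        if best_partition is not None:
            partition, improved = best_partition, True
    return partition
```

The loop mirrors the hill-climbing structure of Algorithm 1: at each step it evaluates every feasible α-reduction, commits to the single change with the largest gain in log-likelihood, and terminates when no change improves the score.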
D09-1001
unsupervised semantic parsingwe present the first unsupervised approach to the problem of learning a semantic parser using markov logicour usp system transforms dependency trees into quasilogical forms recursively induces lambda forms from these and clusters them to abstract away syntactic variations of the same meaningthe map semantic parse of a sentence is obtained by recursively assigning its parts to lambdaform clusters and composing themwe evaluate our approach by using it to extract a knowledge base from biomedical abstracts and answer questionsusp substantially outperforms textrunner dirt and an informed baseline on both precision and recall on this taskwe consider a semantic parsing setting where the goal is to decompose the syntactic dependency tree of a sentence into fragments assign each of these fragments to a cluster of semantically equivalent syntactic structures and predict predicateargument relations between the fragmentswe model joint probability of the dependency tree and its latent semantic representation using markov logic networks selecting parameters to maximize the probability of the observed dependency structureswe group parameters and impose local normalization constraints within each group
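The local normalization constraints mentioned above make the optimal weights available in closed form as empirical log relative frequencies, e.g. log(n_{c,f}/n_c) for the core-form rule. The short, self-contained sketch below illustrates that computation on invented toy counts; it is an illustration of the formula, not code from the USP system.

```python
import math
from collections import Counter

# Illustration of the closed-form weight estimate under local normalization:
# within one formula group (here, the core forms f observed for a single
# cluster c) the optimal weight is log(n_cf / n_c), the log relative frequency.
# The toy counts below are invented for the example.

def optimal_weights(core_form_counts):
    """Map each core form to its optimal weight within one cluster's group."""
    n_c = sum(core_form_counts.values())
    return {f: math.log(n_cf / n_c) for f, n_cf in core_form_counts.items()}

counts = Counter({"induce": 7, "derive": 2, "infer": 1})
weights = optimal_weights(counts)
# The exp-weights sum to 1, satisfying the group's normalization constraint.
assert abs(sum(math.exp(w) for w in weights.values()) - 1.0) < 1e-9
```

Because each exp-weight is a relative frequency, the weights in a group automatically satisfy the constraint that their exponentials sum to one, which is what lets the learner score candidate operations cheaply.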
first and secondorder expectation semirings with applications to minimumrisk training on translation forests many statistical translation models can be regarded as weighted logical deduction under this paradigm we use weights from the expectation semiring to compute firstorder statistics over packed forests of translations we then introduce novel semiring which computes secondorder statistics this secondorder semiring is essential for many interesting training paradigms such as minimum risk deterministic annealing active learning and semisupervised learning where gradient descent optimization requires computing the gradient of entropy or risk we use these semirings in an opensource machine translation toolkit enabling minimumrisk training a benefit of up to 10 a hypergraph or packed forest is a compact data structure that uses structuresharing to represent exponentially many trees in polynomial spacea weighted hypergraph also defines a probability or other weight for each tree and can be used to represent the hypothesis space considered by a monolingual parser or a treebased translation system eg tree to string string to tree tree to tree or string to string with latent tree structures given a hypergraph we are often interested in computing some quantities over it using dynamic programming algorithmsfor example we may want to run the viterbi algorithm to find the most probable derivation tree in the hypergraph or the k most probable treessemiringweighted logic programming is a general framework to specify these algorithms goodman describes many useful semirings while most of these semirings are used in testing we are mainly interested in the semirings that are useful for training the expectation semiring originally proposed for finitestate machines is one such training semiring and can be used to compute feature expectations for the estep of the them algorithm or gradients of the likelihood function for gradient descentin this paper we apply the expectation semiring to a hypergraph rather than just a latticewe then propose a novel secondorder expectation semiring nicknamed the variance semiring the original firstorder expectation semiring allows us to efficiently compute a vector of firstorder statistics on the set of paths in a lattice or the set of trees in a hypergraphthe secondorder expectation semiring additionally computes a matrix of secondorder statistics derivatives of expectationswe present details on how to compute many interesting quantities over the hypergraph using the expectation and variance semiringsthese quantities include expected hypothesis length feature expectation entropy crossentropy kullbackleibler divergence bayes risk variance of hypothesis length gradient of entropy and bayes risk covariance and hessian matrix and so onthe variance semiring is essential for many interesting training paradigms such as deterministic annealing minimum risk active and semisupervised learning in these settings we must compute the gradient of entropy or riskthe semirings can also be used for secondorder gradient optimization algorithmswe implement the expectation and variance semirings in joshua and demonstrate their practical benefit by using minimumrisk training to improve hiero we use a specific treebased system called hiero as an example although the discussion is general for any systems that use a hypergraph to represent the hypothesis spacein hiero a synchronous contextfree grammar is extracted from automatically wordaligned corporaan illustrative grammar rule for 
chinesetoenglish translation is where the chinese word in means of and the alignment encoded via subscripts on the nonterminals causes the two phrases around in to be reordered around of in the translationgiven a source sentence hiero uses a cky parser to generate a hypergraph encoding many derivation trees along with the translation stringsformally a hypergraph is a pair where v is a set of nodes and e is a set of hyperedges with each hyperedge connecting a set of antecedent nodes to a single consequent node1 in parsing parlance a node corresponds to an item in the chart the root node corresponds to the goal itema hyperedge represents an scfg rule that has been instantiated at a particular position so that the nonterminals on the right and left sides have been replaced by particular antecedent and consequent items this corresponds to storage of backpointers in the chartwe write t to denote the set of antecedent nodes of a hyperedge e we write i for the hypergraph a trigram language model is integratedrectangles represent items where each item is identified by the nonterminal symbol source span and left and rightside language model statesan item has one or more incoming hyperedgesa hyperedge consists of a rule and a pointer to an antecedent item for each nonterminal symbol in the rule set of incoming hyperedges of node v which represent different ways of deriving v figure 1 shows a simple hierostyle hypergraphthe hypergraph encodes four different derivation trees that share some of the same itemsby exploiting this sharing a hypergraph can compactly represent exponentially many treeswe observe that any finitestate automaton can also be encoded as a hypergraph thus the methods of this paper apply directly to the simpler case of hypothesis lattices as wellwe assume a hypergraph hg which compactly encodes many derivation trees d e d given hg we wish to extract the best derivationsor other aggregate properties of the forest of derivationssemiring parsing is a general framework to describe such algorithmsto define a particular algorithm we choose a semiring k and specify a weight ke e k for each hyperedge e the desired aggregate result then emerges as the total weight of all derivations in the hypergraphfor example to simply count derivations one can assign every hyperedge weight 1 in the semiring of ordinary integers then each derivation also has weight 1 and their total weight is the number of derivationswe write k for a semiring with elements k additive operation multiplicative operation additive identity 0 and multiplicative identity 1the operation is used to obtain the weight of each derivation d by multiplying the weights of its component hyperedges e that is kd eed kethe operation is used to sum over all derivations d in the hypergraph to obtain the total weight of the hypergraph hg which is eed ke2 figure 2 shows how to ded compute the total weight of an acyclic hypergraph hg3 in general the total weight is a sum over exponentially many derivations d but figure 2 sums over these derivations in time only linear on the size of the hypergraphits correctness relies on axiomatic properties of the semiring namely is associative and commutative with identity 0 is associative with twosided identity 1 and distributes over from both sidesthe distributive property is what makes figure 2 workthe other properties are necessary to ensure that the algorithm in figure 2 is general and can be applied with any semiring below we present our novel semiringswe now introduce the computational problems of this 
paper and the semirings we use to solve themwe are given a function p d r0 which decomposes multiplicatively over component hyperedges e of a derivation d d that is p def eed pein practice p will specify a probability distribution over the derivations in the hyper2eisner uses closed semirings that are also equipped with a kleene closure operator for example in the real semiring we define p 1 for p 4 language model states this loss function is additively def decomposableusing re le where le is the loss for a hyperedge e we compute the expected loss with secondorder expectation semirings we can compute from a hypergraph the expectation and variance of hypothesis length the feature expectation vector and covariance matrix the hessian of z and the gradients of entropy and expected lossthe computations should be clear from earlier discussionbelow we compute gradient of entropy or bayes riskgradient of entropy or risk it is easy to see that the gradient of entropy is we may compute as explained in case 3 of section 5 by using defdef ke re pevre vpe where vpe depends on the particular parameterization of the model similarly the gradient of risk of is we may compute using ke we now show how we improve the training of a hiero mt model by optimizing an objective function that includes entropy and riskour objective function could be computed with a firstorder expectation semiring but computing it along with its gradient requires a secondorder onewe assume a globally normalized linear model for its simplicityeach derivation d is scored by where 4b e ri is a vector of features of d we then define the unnormalized distribution p as where the scale factor γ adjusts how sharply the distribution favors the highestscoring hypothesesadjusting θ or γ changes the distribution p minimum error rate training tries to tune θ to minimize the bleu loss of a decoder that chooses the most probable output according to p merts specialized linesearch addresses the problem that this objective function is piecewise constant but it does not scale to a large number of parameterssmith and eisner instead propose a differentiable objective that can be optimized by gradient descent the bayes risk r of this is the expected loss if one were to use a randomized decoder which chooses a hypothesis d in proportion to its probability pif entropy h is large the bayes risk is smooth and has few local minimathus smith and eisner try to avoid local minima by starting with large h and decreasing it gradually during optimizationthis is called deterministic annealing as h 0 the bayes risk does approach the mert objective the objective is minimize are t h where the temperature t starts high and is explicitly decreased as optimization proceedssolving for a given t requires computing the entropy h and risk r and their gradients with respect to θ and γ smith and eisner followed mert in constraining their decoder to only an nbest list so for them computing these quantities did not involve dynamic programmingwe compare those methods to training on a hypergraph containing exponentially many hypothesesin this condition we need our new secondorder semiring methods and must also approximate bleu by an additively decomposable loss 15 our algorithms require that p of is multiplicatively decomposableit suffices to define 4b def eed 4be so that all features are local to individual hyperedges the vector 4be indicates which features fire on hyperedge e then score of is additively decomposable we can then set pe exp and vpe γpe4b and use the algorithms 
described in section 6 to compute h and r and their gradients with respect to θ and γ16 15pauls et al concurrently developed a method to maximize the expected ngram counts on a hypergraph using gradient descenttheir objective is similar to the minimum risk objective and their gradient descent optimization involves in algorithms in computing expected featurengram counts as well as expected products of features and ngram counts which can be viewed as instances of our general algorithms with first and secondorder semiringsthey focused on tuning only a small number of features as in a regular mert setting while our experiments involve both a small and a large number of features16it is easy to verify that the gradient of a function f with respect to γ can be written as a weighted sum of gradients with respect to the feature weights θi iewe built a translation model on a corpus for iwslt 2005 chinesetoenglish translation task which consists of 40k pairs of sentenceswe used a 5gram language model with modified kneserney smoothing trained on the bitexts english using srilm we first investigate how minimumrisk training with and without deterministic annealing performs compared to regular mertmr without da just fixes t 0 and γ 1 in all mr or mrda uses an approximated bleu while mert uses the exact corpus bleu in trainingthe first five rows in table 5 present the results by tuning the weights offive features we observe that mr or mrda performs worse than mert on the dev setthis may be mainly because mr or mrda uses an approximated bleu while mert does noton the test set mr or mrda on an nbest list is comparable to mertbut our new approach mr or mrda on a hypergraph does consistently better than mert despite approximating bleu17 did da helpfor both nbest and hypergraph mrda did obtain a better bleu score than plain mr on the dev set18 this shows that da helps with the local minimum problem as hopedhowever das improvement on the dev set did not transfer to the test setmr is scalable to tune a large number of features while mert is notto achieve competitive performance we adopt a forest reranking approach specifically our training has two stagesin the first stage we train a baseline system as usualwe also find the optimal feature weights for the five features mentioned before using the method of mrda operating on a hypergraphin the second stage we generate a hypergraph for each sentence in the training data using the baseline training scenariosin the small model five features are tunedin the large model 21k additional unigram and bigram features are used systemin this stage we add 21k additional unigram and bigram targetside language model features for example a specific bigram the cat can be a featurenote that the total score by the baseline system is also a feature in the secondstage modelwith these features and the 40k hypergraphs we run the mr training to obtain the optimal weightsduring test time a similar procedure is followedfor a given test sentence the baseline system first generates a hypergraph and then the hypergraph is reranked by the secondstage modelthe last row in table 5 reports the bleu scoresclearly adding more features improves the case with only five featureswe plan to incorporate more informative features described by chiang et al 19we presented firstorder expectation semirings and insideoutside computation in more detail than and developed extensions to higherorder expectation semiringsthis enables efficient computation of many interesting quantities over the exponentially many 
derivations encoded in a hypergraph second derivatives expectations of products and expectations such as risk and entropy along with their derivativesto our knowledge algorithms for these problems have not been presented beforeour approach is theoretically elegant like other work in this vein we used it practically to enable a new form of minimumrisk training that improved chineseenglish mt by 10 bleu pointour implementation will be released within the opensource mt toolkit joshua
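As a concrete illustration of how the inside algorithm of Figure 2 is instantiated with a training semiring, the sketch below runs it with the first-order expectation semiring over a toy hypergraph encoding. The dictionary-based hypergraph representation and function names are assumptions made for the example; they are not Joshua's data structures.

```python
from typing import Dict, List, Tuple

# Inside algorithm (Figure 2) instantiated with the first-order expectation
# semiring.  Elements are pairs (p, x):
#   (p1, x1) ⊕ (p2, x2) = (p1 + p2, x1 + x2)
#   (p1, x1) ⊗ (p2, x2) = (p1 * p2, p1 * x2 + p2 * x1)
# Giving hyperedge e the weight (p_e, p_e * r_e), where r_e is an additively
# decomposable statistic (length, loss, ...), makes the root weight (Z, rbar)
# with rbar / Z the expectation of r.  The hypergraph encoding used here
# (nodes in topological order; incoming[v] = list of (edge_weight, antecedents))
# is a toy representation assumed for the example.

Elem = Tuple[float, float]
ZERO: Elem = (0.0, 0.0)
ONE: Elem = (1.0, 0.0)

def oplus(a: Elem, b: Elem) -> Elem:
    return (a[0] + b[0], a[1] + b[1])

def otimes(a: Elem, b: Elem) -> Elem:
    return (a[0] * b[0], a[0] * b[1] + b[0] * a[1])

def inside(topo_nodes: List[str],
           incoming: Dict[str, List[Tuple[Elem, List[str]]]]) -> Dict[str, Elem]:
    beta: Dict[str, Elem] = {}
    for v in topo_nodes:                       # leaves first, goal item last
        edges = incoming.get(v, [])
        if not edges:                          # axiom / leaf item
            beta[v] = ONE
            continue
        total = ZERO
        for edge_weight, antecedents in edges:
            prod = edge_weight
            for u in antecedents:              # multiply in antecedent weights
                prod = otimes(prod, beta[u])
            total = oplus(total, prod)
        beta[v] = total
    return beta
```

Dividing the second component of the goal item's weight by the first recovers the expectation of any additively decomposable statistic r, such as expected hypothesis length or expected loss.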
D09-1005
first and secondorder expectation semirings with applications to minimumrisk training on translation forestsmany statistical translation models can be regarded as weighted logical deductionunder this paradigm we use weights from the expectation semiring to compute firstorder statistics over packed forests of translations we then introduce a novel secondorder expectation semiring which computes secondorder statistics this secondorder semiring is essential for many interesting training paradigms such as minimum risk deterministic annealing active learning and semisupervised learning where gradient descent optimization requires computing the gradient of entropy or riskwe use these semirings in an opensource machine translation toolkit joshua enabling minimumrisk training for a benefit of up to 10 bleu pointwe consider minimum risk training using a linearly decomposable approximation of bleuthe sufficient statistics for graph expected bleu can be computed using expectation semiringswe extend the work of smith and eisner and obtain much better estimates of feature expectations by using a packed chart instead of an nbest listwe perform expected bleu training with deterministic annealing on translation forests generated by hiero
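For reference, here is a minimal sketch of the second-order (variance) semiring operations described above, written as plain Python functions. The element layout (p, r, s, t) and the choice of hyperedge weight are our reading of the construction and should be treated as assumptions rather than a verbatim transcription of the paper.

```python
from typing import Tuple

# Second-order ("variance") expectation semiring operations.  Elements are
# 4-tuples (p, r, s, t); ⊕ is componentwise addition and ⊗ follows the product
# rule below.  Giving hyperedge e the weight (p_e, p_e*r_e, p_e*s_e, p_e*r_e*s_e)
# makes the total hypergraph weight (Z, rbar, sbar, tbar), where tbar / Z is
# the expectation of the product of the two additive statistics r and s: the
# kind of second-order quantity needed for gradients of entropy and risk.
# This layout is an assumption, not a verbatim transcription of the paper.

Elem2 = Tuple[float, float, float, float]

def oplus2(a: Elem2, b: Elem2) -> Elem2:
    return (a[0] + b[0], a[1] + b[1], a[2] + b[2], a[3] + b[3])

def otimes2(a: Elem2, b: Elem2) -> Elem2:
    p1, r1, s1, t1 = a
    p2, r2, s2, t2 = b
    return (p1 * p2,
            p1 * r2 + p2 * r1,
            p1 * s2 + p2 * s1,
            p1 * t2 + p2 * t1 + r1 * s2 + r2 * s1)

def edge_weight(p_e: float, r_e: float, s_e: float) -> Elem2:
    """Hyperedge weight for computing E[R*S] with additively decomposable R, S."""
    return (p_e, p_e * r_e, p_e * s_e, p_e * r_e * s_e)
```

Running the inside algorithm with these operations and normalizing by Z yields the expectations of r, of s, and of their product, which is exactly the extra statistic required when differentiating entropy or risk.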
labeled lda a supervised topic model for credit attribution in multilabeled corpora a significant portion of the worlds text is tagged by readers on social bookmarkwebsites attribution an inherent problem in these corpora because most pages have multiple tags but the tags do not always apply with equal specificity across the whole document solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa this introduces a topic model that constrains latent dirichlet allocation by defining a onetoone correspondence between ldas latent topics and user tags this allows labeled lda to directly learn wordtag correspondences we demonstrate labeled ldas improved expressiveness over traditional lda with visualizations of a corpus of tagged web from labeled lda outperforms svms by more than 3 to 1 when extracting tagspecific document snippets as a multilabel text classifier our model is competitive with a discriminative baseline on a variety of datasets from news sources such as reuters to modern community web portals like delicious a significant proportion of the worlds textual data is labeled with multiple humanprovided tagsthese collections reflect the fact that documents are often about more than one thingfor example a news story about a highway transportation bill might naturally be filed under both transportation and politics with neither category acting as a clear subset of the othersimilarly a single web page in delicious might well be annotated with tags as diverse as arts physics alaska and beautyhowever not all tags apply with equal specificity across the whole document opening up new opportunities for information retrieval and corpus analysis on tagged corporafor instance users who browse for documents with a particular tag might prefer to see summaries that focus on the portion of the document most relevant to the tag a task we call tagspecific snippet extractionand when a user browses to a particular document a tagaugmented user interface might provide overview visualization cues highlighting which portions of the document are more or less relevant to the tag helping the user quickly access the information they seekone simple approach to these challenges can be found in models that explicitly address the credit attribution problem by associating individual words in a document with their most appropriate labelsfor instance in our news story about the transportation bill if the model knew that the word highway went with transportation and that the word politicians went with politics more relevant passages could be extracted for either labelwe seek an approach that can automatically learn the posterior distribution of each word in a document conditioned on the documents label setone promising approach to the credit attribution problem lies in the machinery of latent dirichlet allocation a recent model that has gained popularity among theoreticians and practitioners alike as a tool for automatic corpus summarization and visualizationlda is a completely unsupervised algorithm that models each document as a mixture of topicsthe model generates automatic summaries of topics in terms of a discrete probability distribution over words for each topic and further infers perdocument discrete distributions over topicsmost importantly lda makes the explicit assumption that each word is generated from one underlying topicalthough lda is expressive enough to model multiple topics per document it is not appropriate for multilabeled corpora 
because as an unsupervised model it offers no obvious way of incorporating a supervised label set into its learning procedurein particular lda often learns some topics that are hard to interpret and the model provides no tools for tuning the generated topics to suit an enduse application even when time and resources exist to provide some document labelsseveral modifications of lda to incorporate supervision have been proposed in the literaturetwo such models supervised lda and disclda are inappropriate for multiply labeled corpora because they limit a document to being associated with only a single labelsupervised lda posits that a label is generated from each documents empirical topic mixture distributiondisclda associates a single categorical label variable with each document and associates a topic mixture with each labela third model mmlda is not constrained to one label per document because it models each document as a bag of words with a bag of labels with topics for each observation drawn from a shared topic distributionbut like the other models mmldas learned topics do not correspond directly with the label setconsequently these models fall short as a solution to the credit attribution problembecause labels have meaning to the people that assigned them a simple solution to the credit attribution problem is to assign a documents words to its labels rather than to a latent and possibly less interpretable semantic spacethis paper presents labeled lda a generative model for multiply labeled corpora that marries the multilabel supervision common to modern text datasets with the wordassignment ambiguity resolution of the lda family of modelsin contrast to standard lda and its existing supervised variants our model associates each label with one topic in direct correspondencein the following section llda is shown to be a natural extension of both lda and multinomial naive bayes we demonstrate that llda can go a long way toward solving the credit attribution problem in multiply labeled documents with improved interpretability over lda we show that lldas credit attribution ability enables it to greatly outperform supfigure 1 graphical model of labeled lda unlike standard lda both the label set λ as well as the topic prior α influence the topic mixture e port vector machines on a tagdriven snippet extraction task on web pages from delicious and despite its generative semantics we show that labeled lda is competitive with a strong baseline discriminative classifier on two multilabel text classification tasks labeled lda is a probabilistic graphical model that describes a process for generating a labeled document collectionlike latent dirichlet allocation labeled lda models each document as a mixture of underlying topics and generates each word from one topicunlike lda llda incorporates supervision by simply constraining the topic model to use only those topics that correspond to a documents label setthe model description that follows assumes the reader is familiar with the basic lda model let each document d be represented by a tuple consisting of a list of word indices w and a list of binary topic presenceabsence indicators λ where each wz e 1 v and each lk e 01here nd is the document length v is the vocabulary size and k the total number of unique labels in the corpuswe set the number of topics in labeled lda to be the number of unique labels k in the corpusthe generative process for the algorithm is found in table 1steps 1 and 2drawing the multinomial topic distributions over vocabulary 3k 
for each topic k from a dirichlet prior 77remain the same as for traditional lda page 4the traditional lda model then draws a multinomial mixture distribution e over all k topics for each document d from a dirichlet prior αhowever we would like to restrict e to be defined only over the topics that correspond to c3k is a vector consisting of the parameters of the multinomial distribution corresponding to the kth topic α are the parameters of the dirichlet topic prior and 77 are the parameters of the word prior while φk is the label prior for topic k for the meaning of the projection matrix l please refer to eq 1 its labels asince the wordtopic assignments zi are drawn from this distribution this restriction ensures that all the topic assignments are limited to the documents labelstowards this objective we first generate the documents labels a using a bernoulli coin toss for each topic k with a labeling prior probability φk as shown in step 5next we define the vector of documents labels to be x kλ k 1this allows us to define a documentspecific label projection matrix l of size md k for each document d where md x as follows for each row i 1 md and column j 1k in other words the ith row of l has an entry of 1 in column j if and only if the ith document label a i is equal to the topic j and zero otherwiseas the name indicates we use the l matrix to project the parameter vector of the dirichlet topic prior α t to a lower dimensional vector α as follows clearly the dimensions of the projected vector correspond to the topics represented by the labels of the documentfor example suppose k 4 and that a document d has labels given by a 011 0 which implies x 2 3 then l then 0 is drawn from a dirichlet distribution with parameters α l α t this fulfills our requirement that the documents topics are restricted to its own labelsthe projection step constitutes the deterministic step 6 in table 1the remaining part of the model from steps 7 through 10 are the same as for regular ldathe dependency of 0 on both α and a is indicated by directed edges from a and α to 0 in the plate notation in figure 1this is the only additional dependency we introduce in ldas representation in most applications discussed in this paper we will assume that the documents are multiply tagged with human labels both at learning and inference timewhen the labels a of the document are observed the labeling prior φ is dseparated from the rest of the model given ahence the model is same as traditional lda except the constraint that the topic prior α is now restricted to the set of labeled topics xtherefore we can use collapsed gibbs sampling for training where the sampling probability for a topic for position i in a document d in labeled lda is given by where nwi ij is the count of word wi in topic j that does not include the current assignment zi a missing subscript or superscript ij indicates a summation over that dimension and 1 is a vector of 1s of appropriate dimensionalthough the equation above looks exactly the same as that of lda we have an important distinction in that the target topic j is restricted to belong to the set of labels ie j xonce the topic multinomials c3 are learned from the training set one can perform inference on any new labeled test document using gibbs sampling restricted to its tags to determine its perword label assignments zin addition one can also compute its posterior distribution θ over topics by appropriately normalizing the topic assignments zit should now be apparent to the reader how the new model 
addresses some of the problems in multilabeled corpora that we highlighted in section 1for example since there is a onetoone correspondence between the labels and topics the model can display automatic topical summaries for each label k in terms of the topicspecific distribution βksimilarly since the model assigns a label zz to each word wz in the document d automatically we can now extract portions of the document relevant to each label k such that zz kin addition we can use the topic distribution θ to rank the user specified labels in the order of their relevance to the document thereby also eliminating spurious ones if necessaryfinally we note that other less restrictive variants of the proposed llda model are possiblefor example one could consider a version that allows topics that do not correspond to the label set of a given document with a small probability or one that allows a common background topic in all documentswe did implement these variants in our preliminary experiments but they did not yield better performance than llda in the tasks we consideredhence we do not report them in this paperthe derivation of the algorithm so far has focused on its relationship to ldahowever labeled lda can also be seen as an extension of the event model of a traditional multinomial naive bayes classifier by the introduction of a mixture modelin this section we develop the analogy as another way to understand llda from a supervised perspectiveconsider the case where no document in the collection is assigned two or more labelsnow for a particular document d with label ld labeled lda draws each words topic variable zz from a multinomial constrained to the documents label set ie zz ld for each word position i in the documentduring learning the gibbs sampler will assign each zz to ld while incrementing old effectively counting the occurences of each word type in documents labeled with ldthus in the singly labeled document case the probability of each document under labeled lda is equal to the probability of the document under the multinomial naive bayes event model trained on those same document instancesunlike the multinomial naive bayes classifier labeled lda does not encode a decision boundary for unlabeled documents by comparing pld to pld although we discuss using labeled lda for multilabel classification in section 7labeled ldas similarity to naive bayes ends with the introduction of a second label to any documentin a traditional oneversusrest multinomial naive bayes model a separate classifier for each label would be trained on all documents with that label so each word can contribute a count of 1 to every observed labels word distributionby contrast labeled lda assumes that each document is a mixture of underlying topics so the count mass of single word instance must instead be distributed over the documents observed labelssocial bookmarking websites contain millions of tags describing many of the webs most popular and useful pageshowever not all tags are uniformly appropriate at all places within a documentin the sections that follow we examine mechanisms by which labeled ldas credit assignment mechanism can be utilized to help support browsing and summarizing tagged document collectionsto create a consistent dataset for experimenting with our model we selected 20 tags of medium to high frequency from a collection of documents dataset crawled from delicious a popular social bookmarking website from that larger dataset we selected uniformly at random four thousand documents that contained at 
least one of the 20 tags and then filtered each documents tag set by removing tags not present in our tag setafter filtering the resulting corpus averaged 781 nonstop words per document with each document having 4 distinct tags on averagein contrast to many existing text datasets our tagged corpus is highly multiply labeled almost 90 of of the documents have more than one tagwe will refer to this collection of data as the delicious tag dataseta first question we ask of labeled lda is how its topics compare with those learned by traditional lda on the same collection of documentswe ran our implementations of labeled lda and lda on the delicious corpus described aboveboth are based on the standard collapsed gibbs sampler with the constraints for labeled lda implemented as in section 2figure 2 comparison of some of the 20 topics learned on delicious by labeled lda and traditional lda with representative words for each topic shown in the boxeslabeled ldas topics are named by their associated tagarrows from righttoleft show the mapping of lda topics to the closest labeled lda topic by cosine similaritytags not shown are design education english grammar history internet language philosophy politics programming reference style writingfigure 2 shows the top words associated with 20 topics learned by labeled lda and 20 topics learned by unsupervised lda on the delicious document collectionlabeled ldas topics are directly named with the tag that corresponds to each topic an improvement over standard practice of inferring the topic name by inspection the topics learned by the unsupervised variant were matched to a labeled lda topic highest cosine similaritythe topics selected are representative compared to labeled lda unmodified lda allocates many topics for describing the largest parts of the the elements of style william strunk jrasserting that one must first know the rules to break them this classic reference book is a musthave for any student and conscientious writerintended for use in which the practice of composition is combined with the study of literatureit gives in brief space the principal requirements of plain english style and concentratesattention on the rules of usage and principles of composition most commonly violated corpus and underrepresents tags that are less uncommon of the 20 topics learned lda learned multiple topics mapping to each of five tags and learned no topics that aligned with six tags in addition to providing automatic summaries of the words best associated with each tag in the corpus labeled ldas credit attribution mechanism can be used to augment the view of a single document with rich contextual information about the documents tagsfigure 3 shows one web document from the collection a page describing a guide to writing english prosethe 10 most common tags for that document are writing reference english grammar style language books book strunk and education the first eight of which were included in our set of 20 tagsin the figure each word that has high posterior probability from one tag has been annotated with that tagthe red words come from the style tag green from the grammar tag blue from the reference tag and black from the education tagin this case the model does very well at assigning individual words to the tags that subjectively seem to strongly imply the presence of that tag on this pagea more polished rendering could add subtle visual cues about which parts of a page are most appropriate for a particular set of tagstag topic id book image pdf review library 
posted read copyright books title works water map human life work science time world years sleep windows file version linux computerfree system software mac comment god jesus people gospel bible reply lord religion written applications spring open web java pattern eclipse development ajax people day link posted time comments back music jane permalink comments read nice post great april blog march june wordpress news information service web online project site free search home web images design content java css website articles page learning jun quote pro views added check anonymous card core power ghz life written jesus words made man called mark john person fact name house light radio media photography news music travel cover game review street public art health food city history science books llda this classic reference book is a musthave for any student and conscientious writerintended for svm the rules of usage and principles of composition most commonly violatedsearch contents bibliographic language llda the beginning of a sentence must refer to the grammatical subject 8divide words at svm combined with the study of literature it gives in brief space the principal requirements ofanother natural application of labeled ldas credit assignment mechanism is as a means of selecting snippets of a document that best describe its contents from the perspective of a particular tagconsider again the document in figure 3intuitively if this document were shown to a user interested in the tag grammar the most appropriate snippet of words might prefer to contain the phrase rules of usage whereas a user interested in the term style might prefer the title elements of style to quantitatively evaluate labeled ldas performance at this task we constructed a set of 29 recently tagged documents from delicious that were labeled with two or more tags from the 20 tag subset resulting in a total of 149 pairsfor each pair we extracted a 15word window with the highest tagspecific score from the documenttwo systems were used to score each window labeled lda and a collection of onevsrest svms trained for each tag in the systemllda scored each window as the expected probability that the tag had generated each wordfor svms each window was taken as its own document and scored using the tagspecific svms unthresholded scoring function taking the window with the most positive scorewhile a complete solution to the tagspecific snippet extraction quality as extracted by llda and svmthe center column is the number of documenttag pairs for which a systems snippet was judged superiorthe right column is the number of snippets for which all three annotators were in complete agreement in the subset of document scored by all three annotators problem might be more informed by better linguistic features this experimental setup suffices to evaluate both kinds of models for their ability to appropriately assign words to underlying labelsfigure 3 shows some example snippets output by our system for this documentnote that while svms did manage to select snippets that were vaguely on topic labeled ldas outputs are generally of superior subjective qualityto quantify this intuition three human annotators rated each pair of snippetsthe outputs were randomly labeled as system a or system b and the annotators were asked to judge which system generated a better tagspecific document subsetthe judges were also allowed to select neither system if there was no clear winnerthe results are summarized in table 2llda was judged superior by a wide 
margin of the 149 judgments lldas output was selected as preferable in 72 cases whereas svms was selected in only 21the difference between these scores was highly significant by the sign testto quantify the reliability of the judgments 51 of the 149 documenttag pairs were labeled by all three annotatorsin this group the judgments were in substantial agreement1 with fleiss kappa at 63further analysis of the triplyannotated subset yields further evidence of lldas advantage over svms 33 of the 51 were tagpage pairs where lldas output was picked by at least one annotator as a better snippet and of those 24 were unanimous in that all three judges selected lldas outputby contrast only 10 of the 51 were tagpage pairs where svms output was picked by at least one annotator and of those only 2 were selected unanimouslyin the preceding section we demonstrated how labeled ldas credit attribution mechanism enabled effective modeling within documentsin this section we consider whether llda can be adapted as an effective multilabel classifier for documents as a wholeto answer that question we applied a modified variant of llda to a multilabel document classification problem given a training set consisting of documents with multiple labels predict the set of labels appropriate for each document in a test setmultilabel classification is a well researched problemmany modern approaches incorporate label correlations ji et alothers like our algorithm are based on mixture models however we are aware of no methods that trade off labelspecific word distributions with documentspecific label distributions in quite the same wayin section 2 we discussed learning and inference when labels are observedin the task of multilabel classification labels are available at training time so the learning part remains the same as discussed beforehowever inferring the best set of labels for an unlabeled document at test time is more complex it involves assessing all label assignments and returning the assignment that has the highest posterior probabilityhowever this is not straightforward since there are 2k possible label assignmentsto make matters worse the support of α is different for different label assignmentsalthough we are in the process of developing an efficient sampling algorithm for this inference for the purposes of this paper we make the simplifying assumption that the model reduces to standard lda at inference where the document is free to sample from any of the k topicsthis is a reasonable assumption because allowing the model to explore the whole topic space for each document is similar to exploring all possible label assignmentsthe documents most likely labels can then be inferred by suitably thresholding its posterior probability over topicsas a baseline we use a set of multiple onevsrest svm classifiers which is a popular and extremely competitive baseline used by most previous papers for instancewe scored each model based on microf1 and macrof1 as our evaluation measures while the former allows larger classes to dominate its results the latter assigns an equal weight to all classes providing us complementary informationwe ran experiments on a corpus from the yahoo directory modeling our experimental conditions on the ones described in 2 we considered documents drawn from 8 top level categories in the yahoo directory where each document can be placed in any number of subcategoriesthe results were mixed with svms ahead on one measure labeled lda beat svms on five out of eight datasets on macrof1 but did not win 
on any datasets on microf1results are presented in table 3because only a processed form of the documents was released the yahoo dataset does not lend itself well to error analysishowever only 33 of the documents in each toplevel category were applied to more than one subcategory so the credit assignment machinery of llda was unused for the majority of documentswe therefore ran an artificial second set of experiments considering only those documents that had been given more than one label in the training dataon these documents the results were again mixed but labeled lda comes out aheadfor macrof1 llda beat svms on four datasets svms beat llda on one dataset and three were a statistical tie3 on microf1 llda did much better than on the larger subset outperforming on four datasets with the other four a statistical tieit is worth noting that the yahoo datasets are skewed by construction to contain many documents with highly overlapping content because each collection is within the same superclass such as arts business etc each subcategories of the named yahoo directory categoriesnumbers in parentheses are standard deviations across runsllda outperforms svms on 5 subsets with macrof1 but on no subsets with microf1 vocabularies will naturally overlap a great deallldas credit attribution mechanism is most effective at partitioning semantically distinct words into their respective label vocabularies so we expect that labeledldas performance as a text classifier would improve on collections with more semantically diverse labelswe also applied our method to text classification on the delicious dataset where the documents are naturally multiply labeled and where the tags are less inherently similar than in the yahoo subcategoriestherefore we expect labeled lda to do better credit assignment on this subset and consequently to show improved performance as a classifier and indeed this is the casewe evaluated llda and multiple onevsrest svms on 4000 documents with the 20 tag subset described in section 3llda and multiple onevsrest svms were trained on the first 80 of documents and evaluated on the remaining 20 with results averaged across 10 random permutations of the datasetthe results are shown in table 4we tuned the svms shared cost parameter c and selected raw term frequency over tfidf weighting based on 4fold crossvalidation on 3000 documents drawn from an independent permutation of the datafor llda we tuned the shared parameters of threshold and proportionality constants in word and topic priorsllda and svm have very similar performance on macrof1 while llda substantially outperforms on microf1in both cases lldas improvement is statistically significantly by a 2tailed paired ttest at 95 confidence multilabel text classification for predicting 20 tags on delicious datallda outperforms svms significantly on both metrics by a 2tailed paired ttest at 95 confidenceone of the main advantages of llda on multiply labeled documents comes from the models documentspecific topic mixture θby explicitly modeling the importance of each label in the document labeled lda can effective perform some contextual word sense disambiguation which suggests why llda can outperform svms on the delicious datasetas a concrete example consider the excerpt of text from the delicious dataset in figure 5the document itself has several tags including design and programminginitially many of the likelihood probabilities p for the words in this excerpt are higher for the label programming than design including content client cms and 
even designed while design has higher likelihoods for just website and happyhowever after performing inference on this document using llda the inferred document probability for design is much higher than it is for programmingin fact the higher probability for the tag more than makes up the difference in the likelihood for all the words except cms so underline words are generated from the design tag red from the programming tagby themselves most words used here have a higher probability in programming than in designbut because the document as a whole is more about design than programming inferring the documents topicmixture θ enables llda to correctly reassign most words that llda correctly infers that most of the words in this passage have more to do with design than programmingthis paper has introduced labeled lda a novel model of multilabeled corpora that directly addresses the credit assignment problemthe new model improves upon lda for labeled corpora by gracefully incorporating user supervision in the form of a onetoone mapping between topics and labelswe demonstrate the models effectiveness on tasks related to credit attribution within documents including document visualizations and tagspecific snippet extractionan approximation to labeled lda is also shown to be competitive with a strong baseline for multilabel classificationbecause labeled lda is a graphical model in the lda family it enables a range of natural extensions for future investigationfor example the current model does not capture correlations between labels but such correlations might be introduced by composing labeled lda with newer state of the art topic models like the correlated topic model or the pachinko allocation model and with improved inference for unsupervised λ labeled lda lends itself naturally to modeling semisupervised corpora where labels are observed for only some documentsthis project was supported in part by the president of stanford university through the iriss initiatives assessment project
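The figure-2 comparison above maps each unsupervised lda topic to its closest labeled lda topic by cosine similarity of their word distributions. The short sketch below shows one way that mapping could be computed; it is illustrative rather than the authors' code, and the array names and shapes (per-topic word-probability matrices plus a list of tag names for the labeled lda topics) are assumptions.

```python
import numpy as np

def match_topics(lda_topics, llda_topics, tag_names):
    """For each unsupervised LDA topic, return the closest Labeled LDA topic."""
    matches = []
    for i, topic in enumerate(lda_topics):
        # cosine similarity between this LDA topic and every Labeled LDA topic
        sims = llda_topics @ topic / (
            np.linalg.norm(llda_topics, axis=1) * np.linalg.norm(topic) + 1e-12)
        j = int(np.argmax(sims))
        matches.append((i, tag_names[j], float(sims[j])))
    return matches
```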
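The credit-attribution visualization and the tag-specific snippet extraction described above both rest on asking, for each word in a multiply tagged document, how strongly each of the document's tags explains that word. A minimal sketch of that computation is given below, assuming a trained labeled lda model exposed as per-tag word distributions `word_given_tag` and a per-document tag mixture `theta_doc` (both names are illustrative); the 15-word window mirrors the experimental setup, and the scoring is an approximation of the expected-probability criterion rather than the authors' implementation.

```python
def word_tag_posterior(word, doc_tags, word_given_tag, theta_doc):
    """p(tag | word, doc), proportional to p(word | tag) * theta_doc[tag]."""
    scores = {t: word_given_tag[t].get(word, 1e-12) * theta_doc[t] for t in doc_tags}
    z = sum(scores.values()) or 1.0
    return {t: s / z for t, s in scores.items()}

def best_snippet(doc_words, tag, doc_tags, word_given_tag, theta_doc, window=15):
    """Return the window of words that best reflects the given tag."""
    best, best_score = None, float("-inf")
    for start in range(max(1, len(doc_words) - window + 1)):
        span = doc_words[start:start + window]
        score = sum(word_tag_posterior(w, doc_tags, word_given_tag, theta_doc)[tag]
                    for w in span) / len(span)
        if score > best_score:
            best, best_score = " ".join(span), score
    return best, best_score
```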
D09-1026
labeled lda a supervised topic model for credit attribution in multilabeled corpora a significant portion of the world text is tagged by readers on social bookmarking websites credit attribution is an inherent problem in these corpora because most pages have multiple tags but the tags do not always apply with equal specificity across the whole document solving the credit attribution problem requires associating each word in a document with the most appropriate tags and vice versa this paper introduces labeled lda a topic model that constrains latent dirichlet allocation by defining a onetoone correspondence between lda latent topics and user tags this allows labeled lda to directly learn wordtag correspondences we demonstrate labeled ldas improved expressiveness over traditional lda with visualizations of a corpus of tagged web pages from delicious labeled lda outperforms svms by more than 3 to 1 when extracting tagspecific document snippets as a multilabel text classifier our model is competitive with a discriminative baseline on a variety of datasets llda extends standard lda to include supervision for specific target categories and the generative process includes a second observed variable ie each document is explicitly labeled with a target category
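For the multi-label classification experiments described above, label sets are predicted by thresholding the inferred per-document topic posterior and scored with micro- and macro-F1. The snippet below is a hedged sketch of that evaluation loop; the threshold value and variable names are assumptions rather than values from the paper, and scikit-learn's `f1_score` supplies both averaging modes.

```python
import numpy as np
from sklearn.metrics import f1_score

def predict_labels(doc_topic_posterior, threshold=0.1):
    # doc_topic_posterior: (num_docs, num_labels) matrix of inferred theta values
    return (doc_topic_posterior >= threshold).astype(int)

def evaluate(y_true, doc_topic_posterior, threshold=0.1):
    # y_true: (num_docs, num_labels) binary indicator matrix of gold label sets
    y_pred = predict_labels(doc_topic_posterior, threshold)
    return {"micro_f1": f1_score(y_true, y_pred, average="micro", zero_division=0),
            "macro_f1": f1_score(y_true, y_pred, average="macro", zero_division=0)}
```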
fast cheap and creative evaluating translation quality using amazonrsquos mechanical turk manual evaluation of translation quality is generally thought to be excessively time consuming and expensive we explore a fast and inexpensive way of doing it using amazons mechanical turk to pay small sums to a large number of nonexpert annotators for 10 we redundantly recreate judgments from a wmt08 translation task we find that when combined nonexpert judgments have a highlevel of agreement with the existing goldstandard judgments of machine translation quality and correlate more strongly with expert judgments than bleu does we go on to show that mechanical turk can be used to calculate humanmediated translation edit rate to conduct reading comprehension experiments with machine translation and to create high quality reference translations conventional wisdom holds that manual evaluation of machine translation is too timeconsuming and expensive to conductinstead researchers routinely use automatic metrics like bleu as the sole evidence of improvement to translation qualityautomatic metrics have been criticized for a variety of reasons and it is clear that they only loosely approximate human judgmentstherefore having people evaluate translation output would be preferable if it were more practicalin this paper we demonstrate that the manual evaluation of translation quality is not as expensive or as time consuming as generally thoughtwe use amazons mechanical turk an online labor market that is designed to pay people small sums of money to complete human intelligence tests tasks that are difficult for computers but easy for peoplewe show thatsnow et al examined the accuracy of labels created using mechanical turk for a variety of natural language processing tasksthese tasks included word sense disambiguation word similarity textual entailment and temporal ordering of events but not machine translationsnow et al measured the quality of nonexpert annotations by comparing them against labels that had been previously created by expert annotatorsthey report interannotator agreement between expert and nonexpert annotators and show that the average of many nonexperts converges on performance of a single expert for many of their tasksalthough it is not common for manual evaluation results to be reported in conference papers several largescale manual evaluations of machine translation quality take place annuallythese include public forums like the nist mt evaluation workshop iwslt and wmt as well as the projectspecific gono go evaluations for the darpa gale programvarious types of human judgments are usednist collects 5point fluency and adequacy scores iwslt and wmt collect relative rankings and darpa evaluates using hter the details of these are provided later in the paperpublic evaluation campaigns provide a ready source of goldstandard data that nonexpert annotations can be compared toamazon describes its mechanical turk web service1 as artificial artificial intelligencethe name and tag line refer to a historical hoax from the 18th century where an automaton appeared to be able to beat human opponents at chess using a clockwork mechanism but was in fact controlled by a person hiding inside the machinethe mechanical turk web site provides a way to pay people small amounts of money to perform tasks that are simple for humans but difficult for computersexamples of these human intelligence tasks range from labeling images to moderating blog comments to providing feedback on relevance of results for a search 
queryanyone with an amazon account can either submit hits or work on hits that were submitted by othersworkers are sometimes referred to as turkers and people designing the hits are requesters requesters can specify the amount that they will pay for each item that is completedpayments are frequently as low as 001turkers are free to select whichever hits interest themamazon provides three mechanisms to help ensure quality first requesters can have each hit be completed by multiple turkers which allows higher quality labels to be selected for instance by taking the majority labelsecond the requester can require that all workers meet a particular set of qualications such as sufficient accuracy on a small test set or a minimum percentage of previously accepted submissionsfinally the requester has the option of rejecting the work of individual workers in which case they are not paidthe level of goodfaith participation by turkers is surprisingly high given the generally small nature of the payment2 for complex undertakings like creating data for nlp tasks turkers do not have a specialized background in the subject so there is an obvious tradeoff between hiring individuals from this nonexpert labor pool and seeking out annotators who have a particular expertisewe use mechanical turk as an inexpensive way of evaluating machine translationin this section we measure the level of agreement between expert and nonexpert judgments of translation qualityto do so we recreate an existing set of goldstandard judgments of machine translation quality taken from the workshop on statistical machine translation which conducts an annual largescale human evaluation of machine translation qualitythe experts who produced the goldstandard judgments are computational linguists who develop machine translation systemswe recreated all judgments from the wmt08 germanenglish news translation taskthe output of the 11 different machine translation systems that participated in this task was scored by ranking translated sentences relative to each otherto collect judgements we reproduced the wmt08 web interface in mechanical turk and provided these instructions evaluate machine translation quality rank each translation from best to worst relative to the other choices if you do not know the source language then you can read the reference translation which was created by a professional human translatorthe web interface displaced 5 different machine translations of the same source sentence and had radio buttons to rate themturkers were paid a grand total of 975 to complete nearly 1000 hitsthese hits exactly replicated the 200 screens worth of expert judgments that were collected for the wmt08 germanenglish news translation task with each screen being completed by five different turkersthe turkers were shown a source sentence a reference translation and translations from five mt systemsthey were asked to rank the translations relative to each other assigning scores from best to worst and allowing tieswe evaluate nonexpert turker judges by measuring their interannotator agreement with the wmt08 expert judges and by comparing the correlation coefficient across the rankings of the machine translation systems produced by the two sets of judges equalthe quality of their works variesfigure 2 shows the agreement of individual turkers with expert annotators plotted against the number of hits they completedthe figure shows that their agreement varies considerably and that turker who completed the most judgments was among the worst 
performingto avoid letting careless annotators drag down results we experimented with weighted votingwe weighted votes in two ways turker agreed with the rest of the turkers over the whole data setthis does not require any gold standard calibration datait goes beyond simple voting because it looks at a turkers performance over the entire set rather than on an itembyitem basisfigure 1 shows that these weighting mechanisms perform similarly wellfor this task deriving weights from agreement with other nonexperts is as effective as deriving weights from expertsmoreover by weighting the votes of five turkers nonexpert judgments perform at the upper bound of expertexpert correlationall correlate more strongly than bleu we are able to achieve the same rate of agreement with experts as they achieve with each othercorrelation when ranking systems in addition to measuring agreement with experts at the sentencelevel we also compare nonexpert systemlevel rankings with expertsfollowing callisonburch et al we assigned a score to each of the 11 mt systems based on how often its translations were judged to be better than or equal to any other systemthese scores were used to rank systems and we measured spearmans ρ against the systemlevel ranking produced by expertsfigure 3 shows how well the nonexpert rankings correlate with expert rankingsan upper bound is indicated by the expertexpert barthis was created using a fivefold cross validation where we used 20 of the expert judgments to rank the systems and measured the correlation against the rankings produced by the other 80 of the judgmentsthis gave a ρ of 078all ways of combining the nonexpert judgments resulted in nearly identical correlation and all produced correlation within the range of with what we would experts tothe rankings produced using mechanical turk had a much stronger correlation with the wmt08 expert rankings than the blue score didit should be noted that the wmt08 data set does not have multiple reference translationsif multiple references were used that bleu would likely have stronger correlationhowever it is clear that the cost of hiring professional translators to create multiple references for the 2000 sentence test set would be much greater than the 10 cost of collecting manual judgments on mechanical turkin this section we report on a number of creative uses of mechanical turk to do more sophisticated taskswe give evidence that turkers can create high quality translations for some languages which would make creating multiple reference translations for bleu less costly than using professional translatorswe report on experiments evaluating translation quality with hter and with reading comprehension testsin addition to evaluating machine translation quality we also investigated the possibility of using mechanical turk to create additional reference translations for use with automatic metrics like bleubefore trying this we were skeptical that turkers would have sufficient language skills to produce translationsour translation hit had the following instructions we solicited translations for 50 sentences in french german spanish chinese and urdu and designed the hit so that five turkers would translate each sentencefiltering machine translation upon inspecting the turkers translations it became clear that many had ignored the instructions and had simply cutandpaste machine translation rather then translating the text themselveswe therefore set up a second hit to filter these outafter receiving the score when one ldc translator is 
compared against the other 10 translators this gives an upper bound on the expected qualitythe turkers translation quality falls within a standard deviation of ldc translators for spanish german and chinesefor all languages turkers produce significantly better translations than an online machine translation system translations we had a second group of turkers clean the resultswe automatically excluded turkers whose translations were flagged 30 of the time or morequality of turkers translations our 50 sentence test sets were selected so that we could compare the translations created by turkers to translations commissioned by the linguistics data consortiumfor the chinese french spanish and german translations we used the the multipletranslation chinese corpus3 this corpus has 11 reference human translations for each chinese source sentencewe had bilingual graduate students translate the first 50 english sentences of that corpus into french german and spanish so that we could reuse the multiple english reference translationsthe urdu sentences were taken from the nist mt eval 2008 urduenglish test set4 which includes three distinct english translations for every urdu source sentencefigure 4 shows the turkers translation quality in terms of the bleu metricto establish an upper bound on expected quality we determined what the bleu score would be for a professional translator when measured against other professionalswe calculated a bleu score for each of the 11 ldc translators using the other 10 translators as the reference setthe average bleu score for ldc2002t01 was 054 with a standard deviation of 007the average bleu for the urdu test set is lower because it has fewer reference translationsto measure the turkers translation quality we randomly selected translations of each sentence from turkers who passed the detect mt hit and compared them against the same sets of 10 reference translations that the ldc translators were compared againstwe randomly sampled the turkers 10 times and calculated averages and standard deviations for each source languagefigure 4 the bleu scores for the turkers translations of spanish german and chinese are within the range of the ldc translatorsfor all languages the quality is significantly higher than an online machine translation systemwe used yahoos babelfish for spanish german french and chinese5 was likely and babylon for urdudemographics we collected demographic information about the turkers who completed the translation taskwe asked how long they had spoken the source language how long they had spostatistics on the left are for people who appeared to do the task honestlythe statistics on the right are for people who appeared to be using mt ken english what their native language was and where they livedtable 1 gives their repliescost and speed we paid turkers 010 to translate each sentence and 0006 to detect whether a sentence was machine translatedthe cost is low enough that we could create a multiple reference set quite cheaply it would cost less than 1000 to create 4 reference translations for 2000 sentencesthe time it took for the 250 translations to be completed for each language variedit took less than 4 hours for spanish 20 hours for french 225 hours for german 2 days for chinese and nearly 4 days for urduhumanmediated translation edit rate is the official evaluation metric of the darpa gale programthe evaluation is conducted annually by the linguistics data consortium and it is used to determine whether the teams participating the program have met that 
years benchmarksthese evaluations are used as a go no go determinant of whether teams will continue to receive fundingthus each team have a strong incentive to get as good a result as possible under the metriceach of the three gale teams encompasses multiple sites and each has a collection of machine translation systemsa general strategy employed by all teams is to perform system combination over these systems to produce a synthetic translation that is better than the sum of its parts the contribution of each component system is weighted by the expectation that it will produce good outputto our knowledge none of the teams perform their own hter evaluations in order to set these weightswe evaluated the feasibility of using mechanical turk to perform hterwe simplified the official gale postediting guidelines we provided these instructions edit machine translation your task is to edit the machine translation making as few changes as possible so that it matches the meaning of the human translation and is good englishplease follow these guidelines edit rate decreases as the number of editors increases from zero and fivewe displayed 10 sentences from a news articlein one column was the reference english translation in the other column were text boxes containing the mt output to be editedto minimize the edit rate we collected edits from five different turkers for every machine translated segmentwe verified these with a second hit were we prompted turkers to for the final score we choose the edited segment which passed the criteria and which minimized the edit distance to the unedited machine translation outputif none of the five edits was deemed to be acceptable then we used the edit distance between the mt and the referencesetup we evaluated five machine translation systems using hterthese systems were selected from wmt09 we wanted a spread in quality so we took the top two and bottom two systems from the germanenglish task and the top system from the frenchenglish task based on the results of the wmt09 evaluation we would expect the see the following ranking from the least edits to the most edits googlefren googledeen rbmt5deen genevadeen and trombledeenresults table 2 gives the hter scores for the five systemstheir ranking is as predicted indicating that the editing is working as expectedthe table reports averaged scores when the five annotators are subsampledthis gives a sense of how much each additional editor is able to minimize the score for each systemthe difference between the ter score with zero editors and the hter five editors is greatest for the rmbt5 system which has a delta of 29 and is smallest for jhutromble with 07one interesting technique for evaluating machine translation quality is through reading comprehension questions about automatically translated textthe quality of machine translation systems can be quantified based on how many questions are answered correctlyjones et al evaluated translation quality using a reading comprehension test the defense language proficiency test which is administered to military translatorsthe dlpt contains a collection of foreign articles of varying levels of difficulties and a set of short answer questionsjones et al used the arabic dlpt to do a study of machine translation quality by automatically translating the arabic documents into english and seeing how many human subjects could successfully pass the examthe advantage of this type of evaluation is that the results have a natural interpretationthey indicate how understandable the output 
of a machine translation system is better than bleu does and better than other manual evaluation like the relative rankingdespite this advantage evaluating mt through reading comprehension has not caught on due to the difficulty of administering it and due to the fact that the dlpt or similar tests are not publicly availablewe conducted a reading comprehension evaluation using mechanical turkinstead of simply administering the test on mechanical turk we used it for all aspects from test creation to answer gradingour procedure was as follows test creation we posted human translations of foreign news articles and ask tukers to write three questions and provide sample answerswe gave simple instructions on what qualifies as a good reading comprehension questionsystem googlefren googledeen rbmt5deen genevadeen trombledeen question selection we posted the questions for each article back to mechanical turk and asked other turkers to vote on whether each question was a good and to indicate if it was redundant with any other questions in the setwe sorted questions to maximize the votes and minimized redundancies using a simple perl script which discarded questions below a threshold and eliminated all redundanciestaking the test we posted machine translated versions of the foreign articles along with the questions and had turkers answer themwe ensured that no one would see multiple translations of the same articlegrading the answers we aggregated the answers and used mechanical turk to grade themwe showed the human translation of the article one question the sample answer and displayed all answers to itafter the turkers graded the answers we calculated the percentage of questions that were answered correctly for each systemturkers created 90 questions for 10 articles which were subsequently filtered down to 47 good questions ranging from 36 questions per article25 turkers answered questions about each translated articleto avoid them answering the questions multiple times we randomly selected which systems translation was shown to themeach systems translation was displayed an average of 5 reference 094 googlefren 085 googledeen 080 rbmt5deen 077 genevadeen 063 jhutrombledeen 050 times per articleas a control we had three turkers answer the reading comprehension questions using the reference translationtable 3 gives the percent of questions that were correctly answered using each of the different systems outputs and using the reference translationthe ranking is exactly what we would expect based on the hter scores and on the human evaluation of the systems in wmt09this again helps to validate that the reading comprehension methodologythe scores are more interpretable than blue scores and than the wmt09 relative rankings since it gives an indication of how understandable the mt output isappendix a shows some sample questions and answers for an articlemechanical turk is an inexpensive way of gathering human judgments and annotations for a wide variety of tasksin this paper we demonstrate that it is feasible to perform manual evaluations of machine translation quality using the web servicethe low cost of the nonexpert labor found on mechanical turk is cheap enough to collect redundant annotations which can be utilized to ensure translation qualityby combining the judgments of many nonexperts we are able to achieve the equivalent quality of expertsthe suggests that manual evaluation of translation quality could be straightforwardly done to validate performance improvements reported in conference papers or 
even for mundane tasks like tracking incremental system updatesthis challenges the conventional wisdom which has long held that automatic metrics must be used since manual evaluation is too costly and timeconsumingwe have shown that mechanical turk can be used creatively to produce quite interesting thingswe showed how a reading comprehension test could be created administered and graded with only very minimal interventionwe believe that it is feasible to use mechanical turk for a wide variety of other machine translated tasks like creating word alignments for sentence pairs verifying the accuracy of document and sentencealignments performing nonsimulated active learning experiments for statistical machine translation even collecting training data for low resource languages like urduthe cost of using mechanical turk is low enough that we might consider attempting quixotic things like humanintheloop minimum error rate training or doubling the amount of training data available for urduthis research was supported by the euromatrixplus project funded by the european commission and by the us national science foundation under grant iis0713448the views and findings are the authors alonethe actress heather locklear amanda on the popular series melrose place was arrested this weekend in santa barbara after driving under the influence of drugsa witness saw her performing inappropriate maneuvers while trying to take her car out of a parking space in montecito as revealed to people magazine by a spokesman for the californian highway policethe witness stated that around 430pm ms locklear hit the accelerator very roughly making excessive noise and trying to take the car out from the parking space with abrupt back and forth maneuverswhile reversing she passed several times in front of his sunglasses shortly after the witness who at first apparently had not recognized the actress saw ms locklear stopping in a nearby street and leaving the vehicleit was this person who alerted the emergency services because he was concerned about ms locklears life when the patrol arrived the police found the actress sitting inside her car which was partially blocking the roadshe seemed confused so the policemen took her to a specialized centre for drugs and alcohol and submitted her a testaccording to a spokesman for the police the actress was cooperative and excessive alcohol was ruled out from the beginning even if as the officers initially observed we believe ms locklear was under the influences drugs ms locklear was arrested under suspicion of driving under the influence of some unspecified substance and imprisoned in the local jail at 700pm to be released some hours latertwo months ago ms locklear was released from a specialist clinic in arizona where she was treated after an episode of anxiety and depression4 questions were selected she was arested on suspicion of driving under the influence of drugsshe was cured for anxiety and depressionanswers to where was ms locklear two months ago that were judged to be correct arizona hospital for treatment of depression at a treatmend clinic in arizona in the arizona clinic being treated for nervous breakdown a clinic in arizona arizona under treatment for depression she was a patient in a clinic in arizona undergoing treatment for anxiety and depression in an arizona mental health facility a clinic in arizona in a clinic being treated for anxiety and depression at an arizona clinic these answers were judged to be incorrect locklear was retired in arizona arizona arizona in 
arizona ms locklaer were laid off after a treatment out of the clinic in arizona
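One of the aggregation schemes described above weights each turker's vote by how often that turker agrees with the rest of the turkers over the whole data set, with no gold-standard calibration data. The sketch below illustrates that idea under an assumed data layout (`judgments` mapping item ids to lists of (turker, label) pairs, where a label might be a pairwise "system A better than system B" decision); it is not the paper's implementation. System-level rankings built from the aggregated labels could then be compared against expert rankings with `scipy.stats.spearmanr`, matching the Spearman's ρ evaluation used above.

```python
from collections import Counter, defaultdict

def turker_weights(judgments):
    """Weight each turker by agreement with the other turkers' majority label."""
    agree, total = defaultdict(int), defaultdict(int)
    for item, votes in judgments.items():
        counts = Counter(label for _, label in votes)
        for turker, label in votes:
            others = counts.copy()
            others[label] -= 1                      # leave this turker's own vote out
            if sum(others.values()) == 0:
                continue
            majority = others.most_common(1)[0][0]
            agree[turker] += int(label == majority)
            total[turker] += 1
    return {t: agree[t] / total[t] for t in total if total[t] > 0}

def weighted_vote(votes, weights):
    """Combine one item's (turker, label) votes using the learned weights."""
    tally = defaultdict(float)
    for turker, label in votes:
        tally[label] += weights.get(turker, 0.0)
    return max(tally, key=tally.get)
```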
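The upper bound on expected translation quality above is obtained by scoring each LDC translator with BLEU against the remaining translators. A small sketch of that leave-one-out computation, assuming `translations` is a list of per-translator lists of tokenized sentences (all translators covering the same sentences) and using nltk's `corpus_bleu`, might look like this.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

def leave_one_out_bleu(translations):
    """For each translator, compute corpus BLEU against the other translators."""
    scores = []
    smooth = SmoothingFunction().method1
    for i, hyp in enumerate(translations):
        # for every sentence, the references are the other translators' outputs
        refs_per_sent = [[t[s] for j, t in enumerate(translations) if j != i]
                         for s in range(len(hyp))]
        scores.append(corpus_bleu(refs_per_sent, hyp, smoothing_function=smooth))
    return scores
```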
D09-1030
fast cheap and creative evaluating translation quality using amazons mechanical turk manual evaluation of translation quality is generally thought to be excessively time consuming and expensive we explore a fast and inexpensive way of doing it using amazons mechanical turk to pay small sums to a large number of nonexpert annotators for 10 we redundantly recreate judgments from a wmt08 translation task we find that when combined nonexpert judgments have a highlevel of agreement with the existing goldstandard judgments of machine translation quality and correlate more strongly with expert judgments than bleu does we go on to show that mechanical turk can be used to calculate humanmediated translation edit rate to conduct reading comprehension experiments with machine translation and to create high quality reference translations we find that lazy annotators tended to stay longer and do more annotations we treat evaluation as a weighted voting problem where each annotator contribution is weighted by agreement with either a gold standard or with other annotators we show the effectiveness of crowd sourcing as a method of accomplishing labor intensive natural language processing tasks
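In the HTER setup described above, the final score for a segment keeps the approved post-edit that minimizes edit distance to the unedited machine translation output. The sketch below illustrates that selection with a plain word-level Levenshtein distance; it is an approximation of the idea, not the official GALE/TERCOM tooling, and it assumes `approved_edits` is a non-empty list of edits that passed the acceptability check.

```python
def edit_distance(a, b):
    """Standard word-level Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, wa in enumerate(a, 1):
        cur = [i]
        for j, wb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (wa != wb)))   # substitution (or match)
        prev = cur
    return prev[-1]

def min_edit_rate(mt_output, approved_edits):
    """Pick the approved post-edit closest to the raw MT output."""
    mt = mt_output.split()
    best_edit, best_dist = None, float("inf")
    for edit in approved_edits:
        d = edit_distance(mt, edit.split())
        if d < best_dist:
            best_edit, best_dist = edit, d
    # approximate per-segment edit rate, normalized by the chosen edit's length
    return best_edit, best_dist / max(1, len(best_edit.split()))
```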
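The last step of the reading-comprehension evaluation above reduces to counting, for each system, the fraction of graded answers judged correct. A trivial sketch, assuming `graded` is a list of (system, question id, is_correct) records produced by the grading HIT, follows.

```python
from collections import defaultdict

def comprehension_scores(graded):
    """Percentage of questions answered correctly per system."""
    correct, total = defaultdict(int), defaultdict(int)
    for system, _question_id, is_correct in graded:
        correct[system] += int(is_correct)
        total[system] += 1
    return {s: correct[s] / total[s] for s in total}
```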
an empirical study of semisupervised structured conditional models for dependency parsing this paper describes an empirical study of highperformance dependency parsers based on a semisupervised learning approach we describe an extension of semisupervised structured conditional models to the dependency parsing problem whose framework is originally proposed in moreover we introduce two extensions related to dependency parsing the first extension is to combine ssscms with another semisupervised approach described in the second extension is to apply the approach to secondorder parsing models such as those described in using a twostage semisupervised learning approach we demonstrate the effectiveness of our proposed methods on dependency parsing experiments using two widely used test collections the penn treebank for english and the prague dependency treebank for czech our best results on test data in the above datasets achieve 9379 parentprediction accuracy for en recent work has successfully developed dependency parsing models for many languages using supervised learning algorithms semisupervised learning methods which make use of unlabeled data in addition to labeled examples have the potential to give improved performance over purely supervised methods for dependency parsingit is often straightforward to obtain large amounts of unlabeled data making semisupervised approaches appealing previous work on semisupervised methods for dependency parsing includes in particular koo et al describe a semisupervised approach that makes use of cluster features induced from unlabeled data and gives stateoftheart results on the widely used dependency parsing test collections the penn treebank for english and the prague dependency treebank for czechthis is a very simple approach but provided significant performance improvements comparing with the stateoftheart supervised dependency parsers such as this paper introduces an alternative method for semisupervised learning for dependency parsingour approach basically follows a framework proposed in we extend it for dependency parsing which we will refer to as a semisupervised structured conditional model in this framework a structured conditional model is constructed by incorporating a series of generative models whose parameters are estimated from unlabeled datathis paper describes a basic method for learning within this approach and in addition describes two extensionsthe first extension is to combine our method with the clusterbased semisupervised method of the second extension is to apply the approach to secondorder parsing models more specifically the model of using a twostage semisupervised learning approachwe conduct experiments on dependency parsing of english and czech our experiments investigate the effectiveness of 1 the basic ssscm for dependency parsing 2 a combination of the ssscm with koo et al s semisupervised approach 3 the twostage semisupervised learning approach that inin this model v1 vk are scalar parameters that may be positive or negative q1 qk are functions that are trained on unlabeled datathe vj parameters will dictate the relative strengths of the functions q1 qk and will be trained on labeled datafor convenience we will use v to refer to the vector of parameters v1 vk and q to refer to the set of generative models q1 qkthe full model is specified by values for w v and qwe will write p to refer to the conditional distribution under parameter values w v qwe will describe a threestep parameter estimation method that 1 initializes the q 
functions to be uniform distributions and estimates parameter values w and v from labeled data 2 induces new functions q1 qk from unlabeled data based on the distribution defined by the w v q values from step 3 reestimates w and v on the labeled examples keeping the q1 qk from step fixedthe end result is a model that combines supervised training with generative models induced from unlabeled datawe now describe how the generative models q1 qk are defined and how they are induced from unlabeled datathese models make direct use of the featurevector definition f used in the original fully supervised dependency parserthe first step is to partition the d features in f into k separate feature vectors r1 rk in our experiments on dependency parsing we partitioned f into up to over 140 separate feature vectors corresponding to different feature typesfor example one feature vector rj might include only those features corresponding to word bigrams involved in dependencies involved in a dependency we then define a generative model that assigns a probability corporates a secondorder parsing modelin addition we evaluate the ssscm for english dependency parsing with large amounts of unlabeled datathroughout this paper we will use x to denote an input sentence and y to denote a labeled dependency structuregiven a sentence x with n words a labeled dependency structure y is a set of n dependencies of the form where h is the index of the headword in the dependency m is the index of the modifier word and l is the label of the dependencywe use h 0 for the root of the sentencewe assume access to a set of labeled training examples xz yzz_1 and in addition a set of unlabeled examples xzm1in conditional loglinear models for dependency parsing a distribution over dependency structures for a sentence x is defined as follows here f is a feature vector representing the dependency in the context of the sentence x in this paper we extend the definition of g to include features that are induced from unlabeled dataspecifically we define to the djdimensional feature vector rjthe parameters of this model are θj1 θjdj they form a multinomial distribution with the constraints that θja 0 and pa θja 1this model can be viewed as a very simple model that defines a distribution over feature vectors rj e rdjthe next section describes how the parameters θja are trained on unlabeled datagiven parameters θja we can simply define the functions q1 qk to be log probabilities under the generative model we modify this definition slightly be introducing scaling factors cja 0 and defining in our experiments cja is simply a count of the number of times the feature indexed by appears in unlabeled datathus more frequent features have their contribution downweighted in the modelwe have found this modification to be beneficialwe now describe the method for estimating the parameters θja of the generative modelswe assume initial parameters w v q which define a distribution p over dependency structures for each unlabeled example x0iwe will reestimate the generative models q based on unlabeled examplesthe likelihood function on unlabeled data is defined as where q0 j is as defined in eq3this function resembles the q function used in the them algorithm where the hidden labels are filled in using the conditional distribution pit is simple to show that the estimates θja that maximize the function in eq5 can be defined as followsfirst define a vector of expected counts based on w v q as note that it is straightforward to calculate these expected counts 
using a variant of the insideoutside algorithm applied to the dependencyparsing data structures for projective dependency structures or the matrixtree theorem for nonprojective dependency structuresthe estimates that maximize eq5 are then in a slight modification we employ the following estimates in our model where η 1 is a parameter of the model this corresponds to a map estimate under a dirichlet prior over the θja parametersthis section describes the full parameter estimation methodthe input to the algorithm is a set of labeled examples xi yini1 a set of unlabeled examples x0imi1 a featurevector definition f and a partition of f into k feature vectors r1 rk which underly the generative modelsthe output from the algorithm is a parameter vector w a set of generative models q1 qk and parameters v1 vk which define a probabilistic dependency parsing model through eqs1 and 2the learning algorithm proceeds in three steps step 1 estimation of a fully supervised modelwe choose the initial value q0 of the generative models to be the uniform distribution ie we set θja 1dj for all j awe then define the regularized loglikelihood function for the labeled examples with the generative model fixed at q0 to be this is a conventional regularized loglikelihood function as commonly used in crf modelsthe parameter c 0 dictates the level of regularization in the modelwe define the initial parameters arg maxv lthese parameters can be found using conventional methods for estimating the parameters of regularized loglikelihood functions note that the gradient of the loglikelihood function can be calculated using the insideoutside algorithm applied to projective dependency parse structures or the matrixtree theorem applied to nonprojective structuresstep 2 estimation of the generative modelsin this step expected count vectors r1 rk are first calculated based on the distribution pgenerative model parameters oja are calculated through the definition in eq6 these estimates define updated generative models q1j for j 1 k through eq4we refer to the new values for the generative models as q1step 3 reestimation of w and v in the final step w1 and v1 are estimated as arg maxv l where l is defined in an analogous way to lthus w and v are reestimated to optimize loglikelihood of the labeled examples with the generative models q1 estimated in step 2the final output from the algorithm is the set of parameters note that it is possible to iterate the methodsteps 2 and 3 can be repeated multiple times but in our experiments we only performed these steps oncekoo et al describe a semisupervised approach that incorporates clusterbased features and that gives competitive results on dependency parsing benchmarksthe method is a twostage approachfirst hierarchical word clusters are derived from unlabeled data using the brown et al clustering algorithm second a new feature set is constructed by representing words by bitstrings of various lengths corresponding to clusters at different levels of the hierarchythese features are combined with conventional features based on words and partofspeech tagsthe new feature set is then used within a conventional discriminative supervised approach such as the averaged perceptron algorithmthe important point is that their approach uses unlabeled data only for the construction of a new feature set and never affects to learning algorithmsit is straightforward to incorporate clusterbased features within the ssscm approach described in this paperwe simply use the clusterbased featurevector representation f 
introduced by as the basis of our approachprevious work has shown that secondorder parsing models which include information from sibling or grandparent relationships between dependencies can give significant improvements in accuracy over firstorder parsing modelsin principle it would be straightforward to extend the ssscm approach that we have described to secondorder parsing modelsin practice however a bottleneck for the method would be the estimation of the generative models on unlabeled datathis step requires calculation of marginals on unlabeled datasecondorder parsing models generally require more costly inference methods for the calculation of marginals and this increased cost may be prohibitive when large quantities of unlabeled data are employedwe instead make use of a simple twostage approach for extending the ssscm approach to the secondorder parsing model of in the first stage we use a firstorder parsing model to estimate generative models q1 qk from unlabeled datain the second stage we incorporate these generative models as features within a secondorder parsing modelmore precisely in our approach we first train a firstorder parsing model by step 1 and 2 exactly as described in section 24 to estimate w0 v0 and q1then we substitute step 3 as a supervised learning such as mira with a secondorder parsing model which incorporates q1 as a realvalues featureswe refer this twostage approach to as twostage ssscmin our experiments we use the 1best mira algorithm 1 as a and unlabeled data used in our experiments parameterestimation method for the secondorder parsing modelin particular we perform the following optimizations on each update t 1 t for reestimating w and v where l represents the loss between correct output of ith sample yi and ythen the scoring function s for each y can be defined as follows where b represents a tunable scaling factor and f1 and f2 represent the feature vectors of first and secondorder parsing parts respectivelywe now describe experiments investigating the effectiveness of the ssscm approach for dependency parsingthe experiments test basic firstorder parsing models as well as the extensions to clusterbased features and secondorder parsing models described in the previous sectionwe conducted experiments on both english and czech datawe used the wall street journal sections of the penn treebank iii as a source of labeled data for english and the prague dependency treebank 10 for czechto facilitate comparisons with previous work we used exactly the same training development and test sets as those described in the english dependencyparsing data sets were constructed using a standard set of headselection rules to convert the phrase structure syntax of the treebank to dependency tree representationswe split the data into three parts sections 0221 for training section 22 for development and section 23 for testthe czech data sets were obtained from the predefined trainingdevelopmenttest partition in the pdtthe unlabeled data for english was derived from the brown laboratory for linguistic information processing corpus 2 giving a total of 1796379 sentences and 43380315 tokensthe raw text section of the pdt was used for czech giving 2349224 sentences and 39336570 tokensthese data sets are identical to the unlabeled data used in and are disjoint from the training development and test setsthe datasets used in our experiments are summarized in table 1in addition we will describe experiments that make use of much larger amounts of unlabeled dataunfortunately we have no data 
available other than pdt for czech this is done only for english dependency parsingtable 2 shows the detail of the larger unlabeled data set used in our experiments where we eliminated sentences that have more than 128 tokens for computational reasonsnote that the total size of the unlabeled data reaches 372g tokens which is approximately 4000 times larger than the size of labeled training datain general we will assume that the input sentences include both words and partofspeech tagsour baseline features are very similar to those described in these features track word and pos bigrams contextual features surrounding dependencies distance features and so onenglish pos tags were assigned by mxpost which was trained on the training data described in section 41czech pos tags were obtained by the following two steps first we used featurebased tagger included with the pdt3 and then we used the method described in to convert the assigned rich pos tags into simplified pos tagsin a second set of experiments we make use of the feature set used in the semisupervised approach of we will refer to this as the clusterbased feature set the bllip and pdt unlabeled data sets shown in table 1 were used to construct the hierarchical clusterings used within the approachnote that when this feature set is used within the ssscm approach the same set of unlabeled data is used to both induce the clusters and to estimate the generative models within the ssscm modelas described in section 22 the generative models in the ssscm approach are defined through a partition of the original feature vector f into k feature vectors r1 rkwe follow a similar approach to that of in partitioning f where the k different feature vectors correspond to different feature types or feature templatesnote that in general we are not necessary to do as above this is one systematic way of a feature design for this approachall results presented in our experiments are given in terms of parentprediction accuracy on unla3training development and test data in pdt already contains pos tags assigned by the featurebased tagger beled dependency parsingwe ignore the parentpredictions of punctuation tokens for english while we retain all the punctuation tokens for czechthese settings match the evaluation setting in previous work such as we used the method proposed by for our secondorder parsing modelsince this method only considers projective dependency structures we projectivized the pdt training data in the same way as we used a nonprojective model trained using an application of the matrixtree theorem for the firstorder czech models and projective parsers for all other modelsas shown in section 2 ssscms with 1storder parsing models have two tunable parameters c and q corresponding to the regularization constant and the dirichlet prior for the generative modelswe selected a fixed value q 2 which was found to work well in preliminary experiments4 the value of c was chosen to optimize performance on development datanote that c for supervised scms were also tuned on development datafor the twostage ssscm for incorporating secondorder parsing model we have additional one tunable parameter b shown in eq8this was also chosen by the value that provided the best performance on development datain addition to providing results for models trained on the full training sets we also performed experiments with smaller labeled training setsthese training sets were either created through random sampling or by using a predefined subset of document ids from the labeled 
training datatable 3 gives results for the ssscm method under various configurations for first and secondorder parsing models with and without the cluster features of and for varying amounts of labeled datathe remainder of this section discusses these results in more detailwe can see from the results in table 3 that our semisupervised approach consistently gives gains datasupervised scm and supervised mira are the baseline first and secondorder approaches ssscm and 2stage ssscm are the first and secondorder approaches described in this paperbaseline refers to models without clusterbased features cl refers to models which make use of clusterbased features in performance under various sizes of labeled datanote that the baseline methods that we have used in these experiments are strong baselinesit is clear that the gains from our method are larger for smaller labeled data sizes a tendency that was also observed in one important observation from the results in table 3 is that ssscms can successfully improve the performance over a baseline method that uses the clusterbased feature set this is in spite of the fact that the generative models within the ssscm approach were trained on the same unlabeled data used to induce the clusterbased featurestable 3 also shows the effectiveness of the twostage approach that integrates the ssscm method within a secondorder parserthis suggests that the ssscm method can be effective in providing features used within a separate learning algorithm providing that this algorithm can make use of realvalued featuresfigure 1 shows the dependency parsing accuracy on english as a function of the amount of unlabeled data used within the ssscm approachwe can see that performance does improve as more unlabeled data is added this trend is seen both with and without clusterbased featuresin addition table 4 shows the performance of our proposed method using 372 billion tokens of unlabeled datanote however that the gain in performance as unlabeled data is added is not as sharp as might be hoped with a relatively modest difference in performance for 434 million tokens vs 372 billion tokens of unlabeled datathe main computational challenge in our approach is the estimation of the generative models q from unlabeled data particularly when the amount of unlabeled data used is largein our implementation on the 43m token bllip corpus using baseline features it takes about 5 hours to compute the expected counts required to estimate the parameters of the generative models on a single 293ghz xeon processorit takes roughly 18 days of computation to estimate the generative models from the larger corpusfortunately it is simple to parallelize this step our method takes a few hours on the larger data set when parallelized across around 300 separate processesnote that once the generative models have been estimated decoding with the model or training the model on labeled data is relatively inexpensive essentially taking the same amount of computation as standard dependencyparsing approachesfinally table 5 displays the final results on test datathere results are obtained using the best setting in terms of the development data performancenote that the english dependency parsing results shown in the table were achieved using 372 billion tokens of unlabeled datathe improvements on test data are similar to those observed on the development datato determine statistical significance we tested the difference of parentprediction errorrates at the sentence level using a paired wilcoxon signed rank 
test. All eight comparisons shown in Table 5 are significant with p < 0.01. Table 6 shows the performance of a number of state-of-the-art approaches on the English and Czech data sets. For both languages our approach gives the best reported figures on these datasets. Our results yield relative error reductions of roughly 27% and 20% over McDonald and Pereira's second-order supervised dependency parsers, and roughly 9% and 7% over the previous best results provided by Koo et al.'s second-order semi-supervised dependency parsers. Note that there are some similarities between our two-stage semi-supervised learning approach and a previously introduced semi-supervised learning method, which is itself an extension of an earlier approach. In particular, both methods use a two-stage approach: they first train generative models or auxiliary problems from unlabeled data, and then they incorporate these trained models into a supervised learning algorithm as real-valued features. Moreover, both methods make direct use of existing feature-vector definitions f in inducing representations from unlabeled data. This paper has described an extension of an existing semi-supervised learning approach to the dependency parsing problem. In addition, we have described extensions that incorporate the cluster-based features of Koo et al. and that allow the use of second-order parsing models. We have described experiments showing that the approach gives significant improvements over state-of-the-art methods for dependency parsing; performance improves when the amount of unlabeled data is increased from 43.8 million tokens to 3.72 billion tokens. The approach should be relatively easily applied to languages other than English or Czech. We stress that the SS-SCM approach requires relatively little hand-engineering: it makes direct use of the existing feature-vector representation f used in a discriminative model, and does not require the design of new features. The main choice in the approach is the partitioning of f into components r1, ..., rK, which in our experience is straightforward.
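To make the two-stage idea above concrete, here is a minimal Python sketch of how real-valued scores from generative models estimated on unlabeled data can be appended to an existing feature vector before it is handed to the supervised learner. The PartitionModel class, the augment helper, and the naive multinomial parameterization are illustrative assumptions of this sketch, not the authors' actual SS-SCM estimator; only the overall structure — one extra real-valued feature per partition r_j of f — follows the description above.

    import numpy as np

    # Illustrative stand-in for one generative model q_j estimated from
    # unlabeled data over a single feature partition r_j (a simple
    # multinomial here; the actual SS-SCM models are defined in the paper).
    class PartitionModel:
        def __init__(self, n_features, pseudo_count=1.0):
            self.counts = np.full(n_features, pseudo_count)

        def update(self, r_j):                 # accumulate (expected) counts
            self.counts += r_j

        def log_prob(self, r_j):               # log q_j(r_j) under the model
            p = self.counts / self.counts.sum()
            return float(np.dot(r_j, np.log(p)))

    def augment(f, models, parts):
        """Return [f ; log q_1(r_1) ; ... ; log q_K(r_K)].  The K extra
        real-valued features are what the supervised learner re-weights
        when it is trained on the labeled data."""
        extras = [m.log_prob(f[idx]) for m, idx in zip(models, parts)]
        return np.concatenate([f, extras])

    # toy usage: a 6-dimensional feature vector split into two templates
    parts = [np.array([0, 1, 2]), np.array([3, 4, 5])]
    models = [PartitionModel(len(idx)) for idx in parts]
    f = np.array([1.0, 0, 1, 0, 2, 0])
    print(augment(f, models, parts))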
D09-1058
An Empirical Study of Semi-supervised Structured Conditional Models for Dependency Parsing. This paper describes an empirical study of high-performance dependency parsers based on a semi-supervised learning approach. We describe an extension of semi-supervised structured conditional models (SS-SCMs) to the dependency parsing problem, whose framework was originally proposed in earlier work. Moreover, we introduce two extensions related to dependency parsing: the first extension is to combine SS-SCMs with another semi-supervised approach described in prior work; the second extension is to apply the approach to second-order parsing models, such as those described in previous work, using a two-stage semi-supervised learning approach. We demonstrate the effectiveness of our proposed methods on dependency parsing experiments using two widely used test collections: the Penn Treebank for English, and the Prague Dependency Treebank for Czech. Our best results on test data in the above datasets achieve 93.79% parent-prediction accuracy for English and 88.05% for Czech. We present a very effective semi-supervised approach in which features from multiple generative models estimated on unlabeled data are combined in a discriminative system for structured prediction.
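As a side note on the significance testing mentioned in the experiments above, the paired Wilcoxon signed-rank test can be run directly over per-sentence parent-prediction error rates. The sketch below uses scipy.stats.wilcoxon on synthetic paired error rates; the sample size and effect size are invented purely for illustration and do not reproduce the reported results.

    import numpy as np
    from scipy.stats import wilcoxon

    # hypothetical per-sentence parent-prediction error rates for a baseline
    # parser and a proposed parser, paired by test sentence
    rng = np.random.default_rng(0)
    baseline_err = rng.beta(2, 20, size=400)
    proposed_err = np.clip(baseline_err - rng.normal(0.005, 0.02, size=400), 0.0, 1.0)

    # paired test over the sentence-level differences
    stat, p_value = wilcoxon(baseline_err, proposed_err)
    print(f"W = {stat:.0f}, p = {p_value:.3g}")  # a small p rejects the null of no difference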
parser adaptation and projection with quasisynchronous grammar features we connect two scenarios in structured parser trained on one corpus to another annotation style and annotations from one to another we propose quasigrammar features for these structured learning tasks that is we score a aligned pair of source and target trees based on local features of the trees and the alignment our quasisynchronous model assigns positive probability to any alignment of any trees in contrast to a synchronous grammar which would insist on some form of structural parallelism in monolingual dependency parser adaptation we achieve high accuracy in translating among multiple annotation styles for the same sentence on the more difficult problem of crosslingual parser projection we learn a dependency parser for a target language by using bilingual text an english parser and automatic word alignments our experiments show that unsupervised qg projection improves on parses trained using only highprecision projected annotations and far outperforms by more than 35 absolute dependency accuracy learning an unsupervised parser from raw targetlanguage text alone when a few targetlanguage parse trees are available projection gives a boost equivalent to doubling the number of targetlanguage trees first author would like to thank the center for intelligent information retrieval at umass amherst we would also like to thank noah smith and rebecca hwa for helpful discussions and the anonymous reviewers for their suggestions for improving the paper consider the problem of learning a dependency parser which must produce a directed tree whose vertices are the words of a given sentencethere are many differing conventions for representing syntactic relations in dependency treessay that we wish to output parses in the prague style and so have annotated a small target corpuseg 100 sentenceswith those conventionsa parser trained on those hundred sentences will achieve mediocre dependency accuracy but what if we also had a large number of trees in the conll style ideally they should help train our parserbut unfortunately a parser that learned to produce perfect conllstyle trees would for example get both links wrong when its coordination constructions were evaluated against a praguestyle gold standard if it were just a matter of this one construction the obvious solution would be to write a few rules by hand to transform the large source training corpus into the target stylesuppose however that there were many more ways that our corpora differedthen we would like to learn a statistical model to transform one style of tree into anotherwe may not possess handannotated training data for this treetotree transformation taskthat would require the two corpora to annotate some of the same sentences in different stylesbut fortunately we can automatically obtain a noisy form of the necessary pairedtree training dataa parser trained on the source corpus can parse the sentences in our target corpus yielding trees in the source stylewe will then learn a tree transformation model relating these noisy source trees to our known trees in the target stylethis model should enable us to convert the original large source corpus to target style giving us additional training data in the target stylefor many target languages however we do not have the luxury of a large parsed source corpus in the language even one in a different style or domain as abovethus we may seek other forms of data to augment our small target corpusone option would be to leverage 
unannotated text but we can also try to transfer syntactic information from a parsed source corpus in another languagethis is an extreme case of outofdomain datathis leads to the second task of this paper learning a statistical model to transform a syntactic analysis of a sentence in one language into an analysis of its translationtree transformations are often modeled with synchronous grammarssuppose we are given a sentence w in the source language and its translation w into the target languagetheir syntactic parses t and t are presumably not independent but will tend to have some parallel or at least correlated structureso we could jointly model the parses t t and the alignment a between them with a model of the form psuch a joint model captures how t a t mutually constrain each other so that even partial knowledge of some of these three variables can help us to recover the others when training or decoding on bilingual textthis idea underlies a number of recent papers on syntaxbased alignment grammar induction from bitext parser projection as well as full joint parsing in this paper we condition on the 1best source tree tas for the alignment a our models either condition on the 1best alignment or integrate the alignment outour models are thus of the form p or in the generative case pwe intend to consider other formulations in future workso far this is very similar to the monolingual parser adaptation scenario but there are a few key differencessince the source and target sentences in the bitext are in different languages there is no longer a trivial alignment between the words of the source and target treesgiven word alignments we could simply try to project dependency links in the source tree onto the target texta linkbylink projection however could result in invalid trees on the target side with cycles or disconnected wordsinstead our models learn the necessary transformations that align and transform a source tree into a target tree by means of quasisynchronous grammar featuresfigure 2 shows an example of bitext helping disambiguation when a parser is trained with only a small number of chinese treeswith the help of the english tree and alignment the parser is able to recover the correct chinese dependencies using qg featuresincorrect edges from the monolingual parser are shown with dashed linesthe parser is able to recover the longdistance dependency from the first chinese word to the last while skipping over the intervening noun phrase that confused the undertrained monolingual parseralthough due to the auxiliary verb china and begun are siblings in english and not in direct dependency the qg features still leverage this indirect projectionwe start by describing the features we use to augment conditional and generative parsers when scoring pairs of trees then we discuss in turn monolingual and crosslingual parser adaptationfinally we present experiments on crosslingual parser projection in conditions when no target language trees are available for training and when some trees are available what should our model of source and target trees look likein our view traditional approaches based on synchronous grammar are problematic both computationally and linguisticallyfull inference takes o time or worse yet synchronous models only consider a limited hypothesis space eg parses must be projective and alignments must decompose according to the recursive parse structurethe synchronous models probability mass function is also restricted to decompose in this way so it makes certain 
conditional independence assumptions put another way it can evaluate only certain properties of the triple we instead model as an arbitrary graph that includes dependency links among the words of each sentence as well as arbitrary alignment links between the words of the two sentencesthis permits nonsynchronous and manytomany alignmentsthe only hard constraint we impose is that the dependency links within each sentence must constitute a valid monolingual parsea directed projective spanning tree1 given the two sentences w w0 our probability distribution over possible graphs considers local features of the parses the alignment and both jointlythus we learn what local syntactic configurations tend to occur in each language and how they correspond across languagesas a result we might learn that parses are mostly synchronous but that there are some systematic crosslinguistic 1nonprojective parsing would also be possible divergences and some instances of sloppy translationour model is thus a form of quasisynchronous grammar in that paper qg was applied to word alignment and has since found applications in question answering paraphrase detection and machine translation all the models in this paper are conditioned on the source tree t0conditionallytrained models of adaptation and projection also condition on the target string w and its alignment a to w0 and thus have the form p the unsupervised generative projection models in 5 have the form pthe score s of a given tuple of trees words and alignment can thus be written as a dot product of weights w with features f and g wjgj j the features f look only at target words and dependenciesin the conditional models of 3 and 6 these features are those of an edgefactored dependency parser in the generative models of 5 f has the form of a dependency model with valence all models for instance have a feature template that considers the parts of speech of a potential parentchild relationin order to benefit from the source language we also need to include bilingual features g when scoring a candidate target dependency link from word x y these features consider the relationship of their corresponding source words x0 and y0for instance the source tree t0 may contain the link x0 y0 which would cause a feature for monotonic projection to fire for the x y edgeif on the other hand y0 x0 e t0 a headswapping feature firesif x0 y0 ie x and y align to the same word the sameword feature firessimilar features fire when x0 and y0 are in grandparentgrandchild sibling ccommand or noneofthe above relationships or when y aligns to nullthese alignment classes are called configurations when training is conditioned on the target words we conjoin these configuration features with the part of speech and coarse part of speech of one or both of the source and target words ie the feature template has from one to four tagsin conditional training the exponentiated scores s are normalized by a constant z et expsfor the generative model the locally normalized generative process is explained in 534previous researchers have written fixup rules to massage the projected links after the fact and learned a parser from the resulting trees instead our models learn the necessary transformations that align and transform a source tree into a target treeother researchers have tackled the interesting task of learning parsers from unparsed bitext alone our methods take advantage of investments in highresource languages such as englishin work most closely related to this paper ganchev et al constrain the 
posterior distribution over targetlanguage dependencies to align to source dependencies some reasonable proportion of the time this approach performs well but cannot directly learn regular crosslanguage nonisomorphisms for instance some fixup rules for auxiliary verbs need to be introducedfinally huang et al use features somewhat like qg configurations on the shiftreduce actions in a monolingual targetlanguage parseras discussed in 1 the adaptation scenario is a special case of parser projection where the word alignments are onetoone and observedto test our handling of qg features we performed experiments in which training saw the correct parse trees in both source and target domains and the mapping between them was simple and regularwe also performed experiments where the source trees were replaced by the noisy output of a trained parser making the mapping more complex and harder to learnwe used the subset of the penn treebank from the conll 2007 shared task and converted it to dependency representation while varying two parameters conll vs prague coordination style and preposition the head vs the child of its nominal objectwe trained an edgefactored dependency parser on source domain data that followed one set of dependency conventionswe then trained an edgefactored parser with qg features on a small amount of target domain datathe source parser outputs were produced for all target data both training and test so that features for the target parser could refer to themin this task we know what the goldstandard source language parses are for any given text since we can produce them from the original penn treebankwe can thus measure the contribution of adaptation loss alone and the combined loss of imperfect sourcedomain parsing with adaptation when no target domain trees are available we simply have the performance of the source domain parser on this outofdomain datatraining a targetdomain parser on as few as 10 sentences shows substantial improvements in accuracyin the gold conditions where the target parser starts with perfect source trees accuracy approaches 100 in the realistic parse conditions where the targetdomain parser gets noisy sourcedomain parses the improvements are quite significant but approach a lower ceiling imposed by the performance of the source parser2 the adaptation problem in this section is a simple proof of concept of the qg approach however more complex and realistic adaptation problems existmonolingual adaptation is perhaps most obviously useful when the source parser is a blackbox or rulebased system or is trained on unavailable dataone might still want to use such a parser in some new context which might require new data or a new annotation standardwe are also interested in scenarios where we want to avoid expensive retraining on large reannotated treebankswe would like a linguist to be able to annotate a few trees according to a hypothesized theory and then quickly use qg adaptation to get a parser for that theoryone example would be adapting a constituency parser to produce dependency parseswe have concentrated here on adapting between two dependency parse styles in order to line up with the crosslingual tasks to which we now turnas in the adaptation scenario above many syntactic structures can be transferred from one language to anotherin this section we evaluate the extent of this direct projection on a small handannotated corpusin 5 we will use a qg generative model to learn dependency parsers from bitext when there are no annotations in the target 
languagefinally in 6we show how qg features can augment a targetlanguage parser trained on a small set of labeled treesfor syntactic annotation projection to work at all we must hypothesize or observe that at least some syntactic structures are preserved in translationhwa et al have called this intuition the direct correspondence assumption given a pair of sentences w and w that are translations of each other with syntactic structure t and t if nodes x and y of t are aligned with nodes x and y of t respectively and if syntactic relationship r holds in t then r holds in t the validity of this assumption clearly depends on the nodetonode alignment of the two treeswe again work in a dependency framework where syntactic nodes are simply lexical itemsthis allows us to use existing work on word alignmenthwa et al tested the dca under idealized conditions by obtaining handcorrected dependency parse trees of a few hundred sentences of spanishenglish and chineseenglish bitextthey also used humanproduced word alignmentssince their word alignments could be manytomany they gave a heuristic direct projection algorithm for resolving them into component dependency relationsit should be noted that this process introduced empty words into the projected target language tree and left words that are unaligned to english detached from the tree as a result they measured performance in dependency fscore rather than accuracywith manual english parses and word alignments this dpa achieved 368 fscore in spanish and 381 in chinesewith collinsmodel english parses and giza word alignments fscore was 339 for spanish and 263 for chinesecompare this to the spanish attachleft baseline of 310 and the chinese attachright baselines of 359these discouragingly low numbers led them to write languagespecific transformation rules to fix up the projected treesafter these rules were applied to the projections of automatic english parses fscore was 657 for english and 524 for chinesewhile these fscores were low it is useful to look at a subset of the alignment dependencies projected across onetoone alignments before the heuristic fixups had a much higher precision if lower recall than hwa et als final resultsusing hwa et als data we calculated that the precision of projection to spanish and chinese via these onetoone links was 65 there is clearly more information in these direct links than one would think from the fscoresto exploit this information however we need to overcome the problems of learning from partial trees when not all target words are attached and learning in the presence of the still considerable noise in the projected onetoone dependencieseg at least 28 error for spanish nonpunctuation dependencieswhat does this noise consist ofsome errors reflect fairly arbitrary annotation conventions in treebanks eg should the auxiliary verb govern the main verb or vice versaother errors arise from divergences in the complements required of certain head wordsin the germanenglish translation pair with coindexed words aligned an den libanon1 denken2 h remember2 lebanon1 we would prefer that the preposition an attach to denken even though the prepositions object libanon aligns to a direct child of rememberin other words we would like the grandparentparentchild chain of denken an libanon to align to the parentchild pair of remember lebanonfinally naturally occurring bitexts contain some number of free or erroneous translationsmachine translation researchers often seek to strike these examples from their training corpora free 
translations are not usually welcome from an mt systemfirst we consider the problem of parser projection when there are zero targetlanguage trees availableas in much other work on unsupervised parsing we try to learn a generative model that can predict targetlanguage sentencesour novel contribution is to condition the probabilities of the generative actions on the dependency parse of a sourcelanguage translationthus our generative model is a quasisynchronous grammar exactly as in 3 when training on target sentences w therefore we tune the model parameters to maximize not et p as in ordinary them but rather et pwe hope that this conditional them training will drive the model to posit appropriate syntactic relationships in the latent variable t becausethanks to the structure of the qg modelthat is the easiest way for it to exploit the extra information in t w to help predict w4 at test time t w are not made available so we just use the trained model to find argmaxt p backing off from the conditioning on t w and summing over abelow we present the specific generative model and some details of training we will then compare three approaches 532 a straight them baseline 533 a hard projection baseline 534 our conditional them approach above our base models of targetlanguage syntax are generative dependency models that have achieved stateofthe art results in unsupervised dependency structure inductionthe simplest version called dependency model with valence has been used in isolation and in combination with other models the dmv generates the right children and then independently the left children for each node in the dependency treenodes correspond to words which are represented by their partofspeech tagsat each step of generation the dmv stochastically chooses whether to stop generating conditioned on the currently generating head whether it is generating to the right or left and whether it has yet generated any children on that sideif it chooses to continue it then 4the contrastive estimation of smith and eisner also used a form of conditional them with similar motivationthey suggested that them grammar induction which learns to predict w unfortunately learns mostly to predict lexical topic or other properties of the training sentences that do not strongly require syntactic latent variablesto focus them on modeling the syntactic relationships they conditioned the prediction of w on almost complete knowledge of the lexical itemssimilarly we condition on a source translation of w furthermore our qg model structure makes it easy for them to learn to exploit the syntactic properties of that translation when predicting w stochastically generates the tag of a new child conditioned on the headthe parameters of the model are thus of the form where head and child are partofspeech tags dir e left right and adj stop e true falseroot is stipulated to generate a single right childbilingual configurations that condition on t w are incorporated into the generative process as in smith and eisner when the model is generating a new child for word x aligned to x it first chooses a configuration and then chooses a source word y in that configurationthe child y is then generated conditioned on its parent x most recent sibling a and its source analogue yas in previous work on grammar induction we learn the dmv from partofspeechtagged targetlanguage textwe use expectation maximization to maximize the likelihood of the datasince the likelihood function is nonconvex in the unsupervised case our choice of initial 
parameters can have a significant effect on the outcomealthough we could also try many random starting points the initializer in klein and manning performs quite wellthe base dependency parser generates the right dependents of a head separately from the left dependents which allows o dynamic programming for an nword target sentencesince the qg annotates nonterminals of the grammar with single nodes of t and we consider two nodes of t when evaluating the above dependency configurations qg parsing runs in o for an mword source sentenceif however we restrict candidate senses for a target child c to come from links in an ibm model 4 viterbi alignment we achieve o where k is the maximum number of possible words aligned to a given target language wordin practice k m and parsing is not appreciably slower than in the monolingual settingif all configurations were equiprobable the source sentence would provide no information to the targetin our qg experiments therefore we started with a bias towards direct parentchild links and a very small probability for breakages of localitythe values of other configuration parameters seem experimentally less important for insuring accurate learningour experiments compare learning on target language text to learning on parallel textin the latter case we compare learning from highprecision onetoone alignments alone to learning from all alignments using a qgour development and test data were drawn from the german tiger and spanish cast3lb treebanks as converted to projective dependencies for the conll 2007 shared task 5 our training data were subsets of the 2006 statistical machine translation workshop shared task in particular from the germanenglish and spanishenglish europarl parallel corpora the shared task provided prebuilt automatic giza word alignments which we used to facilitate replicabilitysince these word alignments do not contain posterior probabilities or null links nor do they distinguish which links are in the ibm model intersection we treated all links as equally likely when learning the qgtarget language words unaligned to any source language words were the only nodes allowed to align to null in qg derivationswe parsed the english side of the bitext with the projective dependency parser described by mcdonald et al trained on the penn treebank 220much previous work on unsupervised grammar induction has used goldstandard partofspeech tags while there are no goldstandard tags for the europarl bitext we did train a conditional markov 5we made one change to the annotation conventions in german in the dependencies provided words in a noun phrase governed by a preposition were all attached to that prepositionthis meant that in the phrase das kind in say subject position das was the child of kind but in fyou are das kind das was the child of fyou arethis seems to be a strange choice in converting from the tiger constituency format which does in fact annotate nps inside pps we have standardized prepositions to govern only the head of the noun phrasewe did not change any other annotation conventions to make them more like englishin the spanish treebank for instance control verbs are the children of their verbal complements in quiero decir quiero is the child of decirin german coordinations the coordinands all attach to the first but in english they all attach to the lastthese particular divergences in annotation style hurt all of our models equally these annotation divergences are one motivation for experiments below that include some target trees model 
tagger on a few thousand tagged sentencesthis is the only supervised data we used in the targetwe created versions of each training corpus with the first thousand ten thousand and hundred thousand sentence pairs each a prefix of the nextsince the targetlanguageonly baseline converged much more slowly we used a version of the corpora with sentences 15 target words or fewerusing the target side of the bitext as training data we initialized our model parameters as described in 52 and ran themwe checked convergence on a development set and measured unlabeled dependency accuracy on heldout test datawe compare performance to simple attachright and attach left baselines for mostly headfinal german the modify next baseline is better for mostly headinitial spanish modify previous winseven after several hundred iterations performance was slightly but not significantly better than the baseline for germanthem training did not beat the baseline for spanish6 the simplest approach to using the highprecision onetoone word alignments is labeled hard projection in the tablewe filtered the training corpus to find sentences where enough links were projected to completely determine a target language treeof course we needed to filter more than 1000 sentences of bitext to output 1000 training sentences in this waywe simply perform supervised training with this subset which is still quite noisy and performance quickly 6while these results are worse than those obtained previously for this model the experiments in klein and manning and only used sentences of 10 words or fewer without punctuation and with goldstandard tagspunctuation in particular seems to trip up the initializer since a sentencefinal periods appear in most sentences them often decides to make it the head plateausstill this method substantially improves over the baselines and unsupervised themrestricting ourselves to fully projected trees seems a waste of informationwe can also simply take all onetoone projected links impute expected counts for the remaining dependencies with them and update our modelsthis approach however performed worse than using only the fully projected treesin fact only the first iteration of them with this method made any improvement afterwards them degraded accuracy further from the numbers in table 3the quasisynchronous model used all of the alignments in reestimating its parameters and performed significantly better than hard projectionunlike them on the target language alone the qgs performance does not depend on a clever initializer for initial model weightsall parameters of the generative model except for the qg configuration features were initialized to zerosetting the prior to prefer direct correspondence provides the necessary bias to initialize learningerror analysis showed that certain types of dependencies eluded the qgs ability to learn from bitextthe spanish treebank treats some verbal complements as the heads of main verbs and auxiliary verbs as the children of participles the qg following the english learned the opposite dependency directionspanish treebank conventions for punctuation were also a common source of errorsin both german and spanish coordinations were often mishandled both treebanks attach the later coordinands and any conjunctions to the first coordinand the reverse is true in englishfinally in both german and spanish preposition attachments often led to errors which is not surprising given the unlexicalized targetlanguage grammarsrather than trying to adjudicate which dependencies are mere 
annotation conventions it would be useful to test learned dependency models on some extrinsic task such as relation extraction or machine translationfinally we consider the problem of parser projection when some target language trees are availableas in the adaptation case we train a conditional model of the target tree given the target sentence using the monolingual and bilingual qg features including configurations conjoined with tags outlined above for these experiments we used the ldcs englishchinese parallel treebank since manual word alignments also exist for a part of this corpus we were able to measure the loss in accuracy from the use of an automatic english parser and word alignerthe sourcelanguage english dependency parser was trained on the wall street journal where it achieved 91 dependency accuracy on development datahowever it was only 803 accurate when applied to our task the english side of the ectb7 after parsing the source side of the bitext we train a parser on the annotated target side using qg features described above both the monolingual targetlanguage parser and the projected parsers are trained to optimize conditional likelihood of the target trees t with ten iterations of stochastic gradient ascentin figure 3 we plot the performance of the targetlanguage parser on heldout bitextalthough projection performance is not surprisingly better if we know the true source trees at training and test time even with the 1best output of the source parser qg features help produce a parser as accurate asq one trained on twice the amount of monolingual datain ablation experiments we included bilingual features only for directly projected links with no features for headswapping grandparents etcwhen using 1best english parses parsers trained only with directprojection and monolingual features performed worse when using gold english parses parsers with directprojectiononly features performed better when trained with more chinese treesthe two related problems of parser adaptation and projection are often approached in different waysmany adaptation methods operate by simple augmentations of the target feature space as we have done here parser projection on the other hand often uses a multistage pipeline 7it would be useful to explore whether the techniques of 3 above could be used to improve english accuracy by domain adaptationin theory a model with qg features trained to perform well on chinese should not suffer from an inaccurate but consistent english parser but the results in figure 3 indicate a significant benefit to be had from better english parsing or from joint chineseenglish inference having twice as much data in the target languagenote that the penalty for using automatic alignments instead of gold alignments is negligible in fact using source text alone is often higher than gold alignmentsusing gold source trees however significantly outperforms using 1best source treesthe methods presented here move parser projection much closer in efficiency and simplicity to monolingual parsingwe showed that augmenting a target parser with quasisynchronous features can lead to significant improvementsfirst in experiments with adapting to different dependency representations in english and then in crosslanguage parser projectionas with many domain adaptation problems it is quite helpful to have some annotated target data especially when annotation styles vary our experiments show that unsupervised qg projection improves on parsers trained using only highprecision projected annotations and 
far outperforms by more than 35 absolute dependency accuracy unsupervised themwhen a small number of targetlanguage parse trees is available projection gives a boost equivalent to doubling the number of target treesthe loss in performance from conditioning only on noisy 1best source parses points to some natural avenues for improvementwe are exploring methods that incorporate a packed parse forest on the source side and similar representations of uncertainty about alignmentsbuilding on our recent belief propagation work we can jointly infer two dependency trees and their alignment under a joint distribution p that evaluates the full graph of dependency and alignment edges
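To illustrate the bilingual configuration features described above, here is a small Python sketch that classifies a candidate target-language dependency edge by the relationship of the aligned source words in the source tree. The function name, the data layout (a head array for the source tree and a target-to-source alignment map), and the omission of the c-command class are assumptions of this sketch rather than a reproduction of the authors' feature code.

    def qg_configuration(x, y, align, src_head):
        """Classify the bilingual configuration of a candidate target edge
        x -> y (parent -> child).  align[i] gives the source token aligned
        to target token i (or None); src_head[j] gives the parent of source
        token j (-1 for the root).  Assumes at most one aligned source word
        per target word and omits the c-command class for brevity."""
        xp, yp = align.get(x), align.get(y)
        if yp is None:
            return "child-aligned-to-null"
        if xp is None:
            return "parent-aligned-to-null"
        if xp == yp:
            return "same-word"
        if src_head[yp] == xp:
            return "parent-child"            # monotonic projection
        if src_head[xp] == yp:
            return "child-parent"            # head swapping
        if src_head[yp] >= 0 and src_head[src_head[yp]] == xp:
            return "grandparent-grandchild"
        if src_head[xp] == src_head[yp]:
            return "siblings"
        return "none-of-the-above"

    # toy usage in the spirit of the "remember Lebanon" example above
    src_head = [-1, 0]                 # source token 1 is the child of token 0
    align = {0: 1, 3: 0}               # target 0 ~ source 1, target 3 ~ source 0
    print(qg_configuration(3, 0, align, src_head))   # -> "parent-child"

In the conditional models, the resulting label would then be conjoined with source and target part-of-speech tags to form the bilingual features g, as described above.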
D09-1086
Parser Adaptation and Projection with Quasi-Synchronous Grammar Features. We connect two scenarios in structured learning: adapting a parser trained on one corpus to another annotation style, and projecting syntactic annotations from one language to another. We propose quasi-synchronous grammar (QG) features for these structured learning tasks. That is, we score an aligned pair of source and target trees based on local features of the trees and the alignment. Our quasi-synchronous model assigns positive probability to any alignment of any trees, in contrast to a synchronous grammar, which would insist on some form of structural parallelism. In monolingual dependency parser adaptation, we achieve high accuracy in translating among multiple annotation styles for the same sentence. On the more difficult problem of cross-lingual parser projection, we learn a dependency parser for a target language by using bilingual text, an English parser, and automatic word alignments. Our experiments show that unsupervised QG projection improves on parsers trained using only high-precision projected annotations, and far outperforms (by more than 35% absolute dependency accuracy) learning an unsupervised parser from raw target-language text alone. When a few target-language parse trees are available, projection gives a boost equivalent to doubling the number of target-language trees. We think of cross-language adaptation as unsupervised projection, using word-aligned parallel text to construct training material for the target language.
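Since the unsupervised projection models above are built on the Dependency Model with Valence, a compact sketch of how a fixed dependency tree is scored under DMV-style stop and attachment parameters may be helpful. The dictionary-based parameterization, the flat toy parameters, and the simplified root handling are assumptions of this sketch; the actual models are trained with (conditional) EM and additionally condition child generation on the aligned source word, which is not shown here.

    import math
    from collections import defaultdict

    def dmv_log_prob(tags, heads, p_stop, p_child):
        """Log-probability of a dependency tree under DMV-style parameters
        P_stop(stop | head, dir, adj) and P_child(child | head, dir).
        heads[i] is the parent index of token i, or -1 if token i is the
        root's single right child."""
        deps = defaultdict(lambda: {"left": [], "right": []})
        logp = 0.0
        for i, h in enumerate(heads):
            if h == -1:
                logp += math.log(p_child[("ROOT", "right", tags[i])])
            else:
                deps[h]["left" if i < h else "right"].append(i)
        for h in range(len(tags)):
            for dr in ("left", "right"):
                children = sorted(deps[h][dr], key=lambda c: abs(c - h))
                adj = True                                            # no child yet on this side
                for c in children:
                    logp += math.log(1.0 - p_stop[(tags[h], dr, adj)])   # decide to continue
                    logp += math.log(p_child[(tags[h], dr, tags[c])])    # generate the child tag
                    adj = False
                logp += math.log(p_stop[(tags[h], dr, adj)])             # decide to stop
        return logp

    # toy usage: tags "DT NN VBZ" with the verb as the root's single child
    tags, heads = ["DT", "NN", "VBZ"], [1, 2, -1]
    p_stop = defaultdict(lambda: 0.5)    # flat toy parameters
    p_child = defaultdict(lambda: 0.1)
    print(dmv_log_prob(tags, heads, p_stop, p_child))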
polylingual topic models topic models are a useful tool for analyzing large text collections but have previously been applied in only monolingual or at most bilingual contexts meanwhile massive collections of interlinked documents in dozens of languages such as wikipedia are now widely available calling for tools that can characterize content in many languages we introduce a polylingual topic model that discovers topics aligned across multiple languages we explore the models characteristics using two large corpora each with over ten different languages and demonstrate its usefulness in supporting machine translation and tracking topic trends across languages statistical topic models have emerged as an increasingly useful analysis tool for large text collectionstopic models have been used for analyzing topic trends in research literature inferring captions for images social network analysis in email and expanding queries with topically related words in information retrieval much of this work however has occurred in monolingual contextsin an increasingly connected world the ability to access documents in many languages has become both a strategic asset and a personally enriching experiencein this paper we present the polylingual topic model we demonstrate its utility and explore its characteristics using two polylingual corpora proceedings of the european parliament and a collection of wikipedia articles there are many potential applications for polylingual topic modelsalthough research literature is typically written in english bibliographic databases often contain substantial quantities of work in other languagesto perform topicbased bibliometric analysis on these collections it is necessary to have topic models that are aligned across languagessuch analysis could be significant in tracking international research trends where language barriers slow the transfer of ideasprevious work on bilingual topic modeling has focused on machine translation applications which rely on sentencealigned parallel translationshowever the growth of the internet and in particular wikipedia has made vast corpora of topically comparable textsdocuments that are topically similar but are not direct translations of one anotherconsiderably more abundant than ever beforewe argue that topic modeling is both a useful and appropriate tool for leveraging correspondences between semantically comparable documents in multiple different languagesin this paper we use two polylingual corpora to answer various critical questions related to polylingual topic modelswe employ a set of direct translations the europarl corpus to evaluate whether pltm can accurately infer topics when documents genuinely contain the same contentwe also explore how the characteristics of different languages affect topic model performancethe second corpus wikipedia articles in twelve languages contains sets of documents that are not translations of one another but are very likely to be about similar conceptswe use this corpus to explore the ability of the model both to infer similarities between vocabularies in different languages and to detect differences in topic emphasis between languagesthe internet makes it possible for people all over the world to access documents from different cultures but readers will not be fluent in this wide variety of languagesby linking topics across languages polylingual topic models can increase crosscultural understanding by providing readers with the ability to characterize the contents of collections in unfamiliar 
languages and identify trends in topic prevalencebilingual topic models for parallel texts with wordtoword alignments have been studied previously using the hmbitam model tam lane and schultz also show improvements in machine translation using bilingual topic modelsboth of these translationfocused topic models infer wordtoword alignments as part of their inference procedures which would become exponentially more complex if additional languages were addedwe take a simpler approach that is more suitable for topically similar document tuples in more than two languagesa recent extended abstract developed concurrently by ni et al discusses a multilingual topic model similar to the one presented herehowever they evaluate their model on only two languages and do not use the model to detect differences between languagesthey also provide little analysis of the differences between polylingual and singlelanguage topic modelsoutside of the field of topic modeling kawaba et al use a wikipediabased model to perform sentiment analysis of blog poststhey find for example that english blog posts about the nintendo wii often relate to a hack which cannot be mentioned in japanese posts due to japanese intellectual property lawsimilarly posts about whaling often use nationalist language in japanese and environmentalist language in englishthe polylingual topic model is an extension of latent dirichlet allocation for modeling polylingual document tupleseach tuple is a set of documents that are loosely equivalent to each other but written in different languages eg corresponding wikipedia articles in french english and germanpltm assumes that the documents in a tuple share the same tuplespecific distribution over topicsthis is unlike lda in which each document is assumed to have its own documentspecific distribution over topicsadditionally pltm assumes that each topic consists of a set of discrete distributions over wordsone for each language l 1 l in other words rather than using a single set of topics φ φ1 φt as in lda there are l sets of languagespecific topics φ1 φl each of which is drawn from a languagespecific symmetric dirichlet with concentration parameter βlanew document tuple w is generated by first drawing a tuplespecific topic distribution from an asymmetric dirichlet prior with concentration parameter α and base measure m then for each language l a latent topic assignment is drawn for each token in that language finally the observed tokens are themselves drawn using the languagespecific topic parameters wl p 11n φlwl zl the graphical model is shown in figure 1given a corpus of training and test document tuplesw and w respectivelytwo possible inference tasks of interest are computing the probability of the test tuples given the training tuples and inferring latent topic assignments for test documentsthese tasks can either be accomplished by averaging over samples of φ1 φl and αm from p or by evaluating a point estimatewe take the latter approach and use the map estimate for αm and the predictive distributions over words for φ1 φlthe probability of heldout document tuples w given training tuples w is then approximated by topic assignments for a test document tuple samplinggibbs sampling involves sequentially resampling each zln from its conditional posterior where zln is the current set of topic assignments for all other tokens in the tuple while ln is the number of occurrences of topic t in the tuple excluding zln the variable being resampledour first set of experiments focuses on document tuples that 
are known to consist of direct translationsin this case we can be confident that the topic distribution is genuinely shared across all languagesalthough direct translations in multiple languages are relatively rare we use direct translations to explore the characteristics of the modelthe europarl corpus consists of parallel texts in eleven western european languages danish german greek english spanish finnish french italian dutch portuguese and swedishthese texts consist of roughly a decade of proceedings of the european parliamentfor our purposes we use alignments at the speech level rather than the sentence level as in many translation tasks using this corpuswe also remove the twentyfive most frequent word types for efficiency reasonsthe remaining collection consists of over 121 million wordsdetails by language are shown in table 1es otros otras otro otra parte dems fi muiden toisaalta muita muut muihin muun fr autres autre part ctmt6 ailleurs m6me it altri altre altro altra dall parte nl andere anderzijds anderen ander als kant pt outros outras outro lado outra noutros sv andra sidan œ annat ena annan the concentration parameter α for the prior over documentspecific topic distributions is initialized to 001 t while the base measure m is initialized to the uniform distributionhyperparameters αm are reestimated every 10 gibbs iterationsfigure 2 shows the most probable words in all languages for four example topics from pltm with 400 topicsthe first topic contains words relating to the european central bankthis topic provides an illustration of the variation in technical terminology captured by pltm including the wide array of acronyms used by different languagesthe second topic concerning children demonstrates the variability of everyday terminology although the four romance languages are closely related they use etymologically unrelated words for childrenthe third topic demonstrates differences in inflectional variationenglish and the romance languages use only singular and plural versions of objective the other germanic languages include compound words while greek and finnish are dominated by inflected variants of the same lexical itemthe final topic demonstrates that pltm effectively clusters syntactic words as well as more semantically specific nouns adjectives and verbsalthough the topics in figure 2 seem highly focused it is interesting to ask whether the model is genuinely learning mixtures of topics or simply assigning entire document tuples to single topicsto answer this question we compute the posterior probability of each topic in each tuple under the trained modelif the model assigns all tokens in a tuple to a single topic the maximum posterior topic probability for that tuple will be near to 10if the model assigns topics uniformly the maximum topic probability will be near 1twe compute histograms of these maximum topic probabilities for t 50100 200 400 800for clarity rather than overlaying five histograms figure 3 shows the histograms converted into smooth curves using a kernel density estimator1 although there is a small bump around 10 values are generally closer to but greater than 1tmaximum topic probability in document although the posterior distribution over topics for each tuple is not concentrated on one topic it is worth checking that this is not simply because the model is assigning a single topic to the 1we use the r density function tokens in each of the languagesalthough the model does not distinguish between topic assignment variables within a given document 
tuple we can nevertheless divide topic assignment variables between languages and use them to estimate a dirichletmultinomial posterior distribution for each language in each tuplefor each tuple we can then calculate the jensenshannon divergence between these distributionsfigure 4 shows the density of these divergences for different numbers of topicsas with the previous figure there are a small number of documents that contain only one topic in all languages and thus have zero divergencethese tend to be very short formulaic parliamentary responses howeverthe vast majority of divergences are relatively low indicating that for each tuple the model is not simply assigning all tokens in a particular language to a single topicas the number of topics increases greater variability in topic distributions causes divergence to increasesmoothed histograms of interlanguage js divergence a topic model specifies a probability distribution over documents or in the case of pltm document tuplesgiven a set of training document tuples pltm can be used to obtain posterior estimates of φ φl and αmthe probability of previously unseen heldout document tuples given these estimates can then be computedthe higher the probability of the heldout document tuples the better the generalization ability of the modelanalytically calculating the probability of a set of heldout document tuples given φ1 φl and αm is intractable due to the summation over an exponential number of topic assignments for these heldout documentshowever recently developed methods provide efficient accurate estimates of this probabilitywe use the lefttoright method of we perform five estimation runs for each document and then calculate standard errors using a bootstrap methodtable 2 shows the log probability of heldout data in nats per word for pltm and lda both trained with 200 topicsthere is substantial variation between languagesadditionally the predictive ability of pltm is consistently slightly worse than that of ldait is important to note however that these results do not imply that lda should be preferred over pltmthat choice depends upon the needs of the modelerrather these results are intended as a quantitative analysis of the difference between the two modelsas the number of topics is increased the word counts per topic become very sparse in monolingual lda models proportional to the size of the vocabularyfigure 5 shows the proportion of all tokens in english and finnish assigned to each topic under lda and pltm with 800 topicsmore than 350 topics in the finnish lda model have zero tokens assigned to them and almost all tokens are assigned to the largest 200 topicsenglish has a larger tail with nonzero counts in all but 16 topicsin contrast pltm assigns a significant number of tokens to almost all 800 topics in very similar proportions in both languagespltm topics therefore have a higher granularity ie they are more specificthis result is important informally we have found that increasing the granularity of topics correlates strongly with user perceptions of the utility of a topic modelan important application for polylingual topic modeling is to use small numbers of comparable document tuples to link topics in larger collections of distinct noncomparable documents in multiple languagesfor example a journal might publish papers in english french german and italianno paper is exactly comparable to any other paper but they are all roughly topically similarif we wish to perform topicbased bibliometric analysis it is vital to be able to track 
the same topics across all languagesone simple way to achieve this topic alignment is to add a small set of comparable document tuples that provide sufficient glue to bind the topics togethercontinuing with the example above one might extract a set of connected wikipedia articles related to the focus of the journal and then train pltm on a joint corpus consisting of journal papers and wikipedia articlesin order to simulate this scenario we create a set of variations of the europarl corpus by treating some documents as if they have no parallelcomparable texts ie we put each of these documents in a singledocument tupleto do this we divide the corpus w into two sets of document tuples a glue set g and a separate set s such that g w p in other words the proportion of tuples in the corpus that are treated as glue is p for every tuple in s we assign each document in that tuple to a new singledocument tupleby doing this every document in s has its own distribution over topics independent of any other documentsideally the glue documents in g will be sufficient to align the topics across languages and will cause comparable documents in s to have similar distributions over topics even though they are modeled independentlyfr russie tchetchenie union avec russe region it ho presidente mi perche relazione votato lang topics at p 025 de rußland russland russischen tschetschenien ukraine en russia russian chechnya cooperation region belarus fr russie tchetchenie avec russe russes situation it russia unione cooperazione cecenia regione russa we train pltm with 100 topics on corpora with p e 1001 005 01 025 05we use 1000 iterations of gibbs sampling with q 001hyperparameters αm are reestimated every 10 iterationswe calculate the jensenshannon divergence between the topic distributions for each pair of individual documents in s that were originally part of the same tuple prior to separationthe lower the divergence the more similar the distributions are to each otherfrom the results in figure 4 we know that leaving all document tuples intact should result in a mean js divergence of less than 01table 3 shows mean js divergences for each value of p as expected js divergence is greater than that obtained when all tuples are left intactdivergence drops significantly when the proportion of glue tuples increases from 001 to 025example topics for p 001 and p 025 are shown in table 4at p 001 german and french both include words relating to russia while the english and italian word distributions appear locally consistent but unrelated to russiaat p 025 the top words for all four languages are related to russiathese results demonstrate that pltm is appropriate for aligning topics in corpora that have only a small subset of comparable documentsone area for future work is to explore whether initialization techniques or better representations of topic cooccurrence might result in alignment of topics with a smaller proportion of comparable textsalthough the pltm is clearly not a substitute for a machine translation systemit has no way to represent syntax or even multiword phrasesit is clear from the examples in figure 2 that the sets of high probability words in different languages for a given topic are likely to include translationswe therefore evaluate the ability of the pltm to generate bilingual lexica similar to other work in unsupervised translation modeling in the early statistical translation model work at ibm these representations were called cepts short for concepts we evaluate sets of highprobability words in 
each topic and multilingual synsets by comparing them to entries in humanconstructed bilingual dictionaries as done by haghighi et al unlike previous work we evaluate all words not just nounswe collected bilingual lexica mapping english words to german greek spanish french italian dutch and swedisheach lexicon is a set of pairs consisting of an english word and a translated word 1we wtwe do not consider multiword termswe expect that simple analysis of topic assignments for sequential words would yield such collocations but we leave this for future workfor every topic t we select a small number k of the most probable words in english and in each translation language wte and wtt respectivelywe then add the cartesian product of these sets for every topic to a set of candidate translations c we report the number of elements of c that appear in the reference lexicaresults for k 1 that is considering only the single most probable word for each language are shown in figure 6precision at this level is relatively high above 50 for spanish french and italian with t 400 and 800many of the candidate pairs that were not in the bilingual lexica were valid translations that simply were not in the lexicawe also do not count morphological variants the model finds en rules and de vorschriften but the lexicon contains only rule and vorschrift results remain strong as we increase k with k 3 t 800 1349 of the 7200 candidate pairs for spanish appeared in the lexicon topic in different languages translations of each otherthe number of such pairs that appear in bilingual lexica is shown on the yaxisfor t 800 the top english and spanish words in 448 topics were exact translations of one anotherin addition to enhancing lexicons by aligning topicspecific vocabulary pltm may also be useful for adapting machine translation systems to new domains by finding translations or near translations in an unstructured corpusthese aligned document pairs could then be fed into standard machine translation systems as training datato evaluate this scenario we train pltm on a set of document tuples from europarl infer topic distributions for a set of heldout documents and then measure our ability to align documents in one language with their translations in another languageit is not necessarily clear that pltm will be effective at identifying translationsin finding a lowdimensional semantic representation topic models deliberately smooth over much of the variation present in languagewe are therefore interested in determining whether the information in the documentspecific topic distributions is sufficient to identify semantically identical documentswe begin by dividing the data into a training set of 69550 document tuples and a test set of 17435 document tuplesin order to make the task more difficult we train a relatively coarsegrained pltm with 50 topics on the training setwe then use this model to infer topic distributions for each of the 11 documents in each of the heldout document tuples using a method similar to that used to calculate heldout probabilities finally for each pair of languages we calculate the difference between the topic distribution for each heldout document in the query language and the topic distribution for each heldout document in the target languagewe use both jensenshannon divergence and cosine distancefor each document in the query language we rank all documents in the target language and record the rank of the actual translationresults averaged over all querytarget language pairs are shown in figure 7 
for jensenshannon divergencecosinebased rankings are significantly worseit is important to note that the length of documents mattersas noted before many of the documents in the europarl collection consist of short formulaic sentencesrestricting the querytarget pairs to only those with query and target documents that are both longer than 50 words results in significant improvement and reduced variance the average proportion of query documents for which the true translation is ranked highest goes from 539 to 727performance continues to improve with longer documents most likely due to better topic inferenceresults vary by languagetable 5 shows results for all target languages with english as a query languageagain english generally performs better with romance languages than germanic languagesdirectly parallel translations are rare in many languages and can be extremely expensive to producehowever the growth of the web and in particular wikipedia has made comparable text corpora documents that are topically similar but are not direct translations of one another considerably more abundant than true parallel corporain this section we explore two questions relating to comparable text corpora and polylingual topic modelingfirst we explore whether comparable document tuples support the alignment of finegrained topics as demonstrated earlier using parallel documentsthis property is useful for building machine translation systems as well as for human readers who are either learning new languages or analyzing texts in languages they do not knowsecond because comparable texts may not use exactly the same topics it becomes crucially important to be able to characterize differences in topic prevalence at the document level and at the languagewide level we downloaded xml copies of all wikipedia articles in twelve different languages welsh german greek english farsi finnish french hebrew italian polish russian and turkishthese versions of wikipedia were selected to provide a diverse range of language families geographic areas and quantities of textwe preprocessed the data by removing tables references images and infoboxeswe dropped all articles in nonenglish languages that did not link to an english articlein the english version of wikipedia we dropped all articles that were not linked to by any other language in our setfor efficiency we truncated each article to the nearest word after 1000 characters and dropped the 50 most common word types in each languageeven with these restrictions the size of the corpus is 1485 million wordswe present results for a pltm with 400 topics1000 gibbs sampling iterations took roughly four days on one cpu with current hardwareas with europarl we can calculate the jensenshannon divergence between pairs of documents within a comparable document tuplewe can then average over all such documentdocument divergences for each pair of languages to get an overall disagreement score between languagesinterestingly we find that almost all languages in our corpus including several pairs that have historically been in conflict show average js divergences of between approximately 008 and 012 for t 400 consistent with our findings for europarl translationssubtle differences of sentiment may be below the granularity of the model sadwrn blaned gallair at lloeren mytholeg space nasa sojus flug mission διαστημικό sts nasa αγγλ small space mission launch satellite nasa spacecraft sojuz nasa apollo ensimmšinen space lento spatiale mission orbite mars satellite spatial תינכות א רודכ לל ח ץר אה 
ללחה spaziale missione programma space sojuz stazione misja kosmicznej stacji misji space nasa космический союз космического спутник станции uzay soyuz ay uzaya salyut sovyetler sbaen madrid el la jos6 sbaeneg de spanischer spanischen spanien madrid la ισπανίας ισπανία de ισπανός ντε μαδρίτη de spanish spain la madrid y espanja de espanjan madrid la real espagnol espagne madrid espagnole juan y de spagna spagnolo spagnola madrid el de hiszpański hiszpanii la juan y де мадрид испании испания испанский de ispanya ispanyol madrid la koba real bardd gerddi iaith beirdd fardd gymraeg dichter schriftsteller literatur gedichte gedicht werk ποιητής ποίηση ποιητή έργο ποιητές ποιήματα poet poetry literature literary poems poem runoilija kirjailija kirjallisuuden kirjoitti runo julkaisi poste 6crivain litt6rature po6sie litt6raire ses overall these scores indicate that although individual pages may show disagreement wikipedia is on average consistent between languagesalthough we find that if wikipedia contains an article on a particular subject in some language the article will tend to be topically similar to the articles about that subject in other languages we also find that across the whole collection different languages emphasize topics to different extentsto demonstrate the wide variation in topics we calculated the proportion of tokens in each language assigned to each topicfigure 8 represents the estimated probabilities of topics given a specific languagecompetitive crosscountry skiing accounts for a significant proportion of the text in finnish but barely exists in welsh and the languages in the southeastern regionmeanwhile interest in actors and actresses is consistent across all languagesfinally historical topics such as the byzantine and ottoman empires are strong in all languages but show geographical variation interest centers around the empireswe introduced a polylingual topic model that discovers topics aligned across multiple languageswe analyzed the characteristics of pltm in comparison to monolingual lda and demonstrated that it is possible to discover aligned topicswe also demonstrated that relatively small numbers of topically comparable document tuples are sufficient to align topics between languages in noncomparable corporaadditionally pltm can support the creation of bilingual lexica for low resource language pairs providing candidate translations for more computationally intense alignment processes without the sentencealigned translations typically used in such taskswhen applied to comparable document collections such as wikipedia pltm supports datadriven analysis of differences and similarities across all languages for readers who understand any one languagethe authors thank limin yao who was involved in early stages of this projectthis work was supported in part by the center for intelligent information retrieval in part by the central intelligence agency the national security agency and national science foundation under nsf grant number iis0326249 and in part by army prime contract number w911nf0710216 and university of pennsylvania subaward number 103548106 and in part by national science foundation under nsf grant cns0619337any opinions findings and conclusions or recommendations expressed in this material are the authors and do not necessarily reflect those of the sponsor
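Both the glue-tuple experiment and the translation-retrieval evaluation described above reduce to the same operation: comparing inferred per-document topic distributions with Jensen-Shannon divergence, and (for retrieval) ranking target-language documents by that divergence. The sketch below shows one way this comparison might be implemented, assuming the per-document distributions have already been estimated by the Gibbs sampler; the function names and the dict-based document representation are illustrative, not taken from the paper.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (base 2) between two topic distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log2(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def rank_by_divergence(query_theta, target_thetas):
    """Rank target-language documents by JS divergence to a query document.

    target_thetas: dict mapping document id -> inferred topic distribution.
    The true translation should ideally come first (lowest divergence).
    """
    scored = sorted((js_divergence(query_theta, theta), doc_id)
                    for doc_id, theta in target_thetas.items())
    return [doc_id for _, doc_id in scored]
```

A lower mean divergence over paired documents corresponds to better-aligned topics, which is how the glue-proportion results in Table 3 would be reproduced under these assumptions.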
D09-1092
Polylingual topic models. Topic models are a useful tool for analyzing large text collections, but have previously been applied in only monolingual or at most bilingual contexts. Meanwhile, massive collections of interlinked documents in dozens of languages, such as Wikipedia, are now widely available, calling for tools that can characterize content in many languages. We introduce a polylingual topic model that discovers topics aligned across multiple languages. We explore the model's characteristics using two large corpora, each with over ten different languages, and demonstrate its usefulness in supporting machine translation and tracking topic trends across languages. We retrieve a list of potential translations simply by selecting a small number n of the most probable words in both languages and then adding the Cartesian product of these sets for every topic to a set of candidate translations. We show that, so long as the proportion of topically aligned to non-aligned documents exceeds 0.25, the topic distributions do not degrade significantly. We extend the original concept of LDA to support polylingual topic models, both on parallel and partly comparable documents. Polylingual topic models learn topics for multiple languages, creating tuples of language-specific distributions over monolingual vocabularies for each topic.
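The candidate-translation procedure mentioned in this summary (take the k most probable words per language for each topic, then add their Cartesian product to a candidate set and check it against a reference lexicon) is simple enough to sketch directly. This is a hypothetical rendering with illustrative names, assuming the per-topic word distributions are available as dictionaries.

```python
from itertools import product

def translation_candidates(phi_en, phi_target, k=1):
    """Candidate translation pairs from a trained polylingual topic model.

    phi_en / phi_target: one {word: probability} distribution per topic for
    English and the target language.  For each topic, the Cartesian product
    of the k most probable words in the two languages is added to the set.
    """
    candidates = set()
    for dist_en, dist_t in zip(phi_en, phi_target):
        top_en = sorted(dist_en, key=dist_en.get, reverse=True)[:k]
        top_t = sorted(dist_t, key=dist_t.get, reverse=True)[:k]
        candidates.update(product(top_en, top_t))
    return candidates

def precision_against_lexicon(candidates, lexicon):
    """Fraction of candidate pairs found in a reference bilingual lexicon."""
    if not candidates:
        return 0.0
    return sum((e, t) in lexicon for e, t in candidates) / len(candidates)
```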
webscale distributional similarity and entity set expansion computing the pairwise semantic similarity between all words on the web is a computationally challenging task parallelization and optimizations are necessary we propose a highly scalable implementation based on distributional similarity implemented in the mapreduce framework and deployed over a 200 billion word crawl of the web the pairwise similarity between 500 million terms is computed in 50 hours using 200 quadcore nodes we apply the learned similarity matrix to the task of automatic set expansion and present a large empirical study to quantify the effect on expansion performance of corpus size corpus quality seed composition and seed size we make public an experimental testbed for set expansion analysis that includes a large collection of diverse entity sets extracted from wikipedia computing the semantic similarity between terms has many applications in nlp including word classification word sense disambiguation contextspelling correction fact extraction semantic role labeling and applications in ir such as query expansion and textual advertising for commercial engines such as yahoo and google creating lists of named entities found on the web is critical for query analysis document categorization and ad matchingcomputing term similarity is typically done by comparing cooccurrence vectors between all pairs of terms scaling this task to the web requires parallelization and optimizationsin this paper we propose a largescale term similarity algorithm based on distributional similarity implemented in the mapreduce framework and deployed over a 200 billion word crawl of the webthe resulting similarity matrix between 500 million terms is applied to the task of expanding lists of named entities we provide a detailed empirical analysis of the discovered named entities and quantify the effect on expansion accuracy of corpus size corpus quality seed composition and seed set sizebelow we review relevant work in optimizing similarity computations and automatic set expansionthe distributional hypothesis which links the meaning of words to their contexts has inspired many algorithms for computing term similarities brute force similarity computation compares all the contexts for each pair of terms with complexity o where n is the number of terms and m is the number of possible contextsmore efficient strategies are of three kinds smoothing techniques such as latent semantic analysis reduce the context space by applying truncated singular value decomposition computing the matrix decomposition however does not scale well to websize termcontext matricesother currently unscalable smoothing techniques include probabilistic latent semantic analysis iterative scaling and latent dirichlet allocation randomized algorithms randomized techniques for approximating various similarity measures have been successfully applied to term similarity common techniques include random indexing based on sparse distributed memory and locality sensitive hashing bayardo et al present a sparse matrix optimization strategy capable of efficiently computing the similarity between terms which is similarity exceeds a given thresholdrychlý and kilgarriff elsayed et al and agirre et al use reverse indexing and the mapreduce framework to distribute the similarity computations across several machinesour proposed approach combines these two strategies and efficiently computes the exact similarity between all pairsbuilding entity lexicons is a task of great interest for which 
structured semistructured and unstructured data have all been explored our own work focuses on set expansion from unstructured web textapart from the choice of a data source stateoftheart entity extraction methods differ in their use of numerous few or no labeled examples the open or targeted nature of the extraction as well as the types of features employedsupervised approaches rely on large sets of labeled examples perform targeted extraction and employ a variety of sentence and corpuslevel featureswhile very precise these methods are typically used for coarse grained entity classes for which large training data sets are availableunsupervised approaches rely on no labeled data and use either bootstrapped classspecific extraction patterns to find new elements of a given class or corpusbased term similarity to find term clusters finally semisupervised methods have shown great promise for identifying and labeling entities starting with a set of seed entities semisupervised extraction methods use either classspecific patterns to populate an entity class or distributional similarity to find terms similar to the seed set semisupervised methods are useful for extending finer grain entity classes for which large unlabeled data sets are availableprevious work has examined the effect of using large sometimes websize corpora on system performance in the case of familiar nlp tasksbanko and brill show that webscale data helps with confusion set disambiguation while lapata and keller find that the web is a good source of ngram counts for unsupervised modelsatterer and schutze examine the influence of corpus size on combining a supervised approach with an unsupervised one for relative clause and ppattachmentetzioni et al and pantel et al show the advantages of using large quantities of generic web text over smaller corpora for extracting relations and named entitiesoverall corpus size and quality are both found to be important for extractionour paper adds to this body of work by focusing on the task of similaritybased set expansion and providing a large empirical study quantify the relative corpus effectsprevious extraction systems report on the size and quality of the training data or if semisupervised the size and quality of entity or pattern seed setsnarrowing the focus to closely related work paşca and paşca and durme show the impact of varying the number of instances representative of a given class and the size of the attribute seed set on the precision of class attribute extractionan example observation is that good quality class attributes can still be extracted using 20 or even 10 instances to represent an entity classamong others etzioni et al shows that a small pattern set can help bootstrap useful entity seed sets and reports on the impact of seed set noise on final performanceunlike previous work empirically quantifying the influence of seed set size and quality on extraction performance of random entity types is a key objective of this paperterm semantic models normally invoke the distributional hypothesis which links the meaning of terms to their contextsmodels are built by recording the surrounding contexts for each term in a large collection of unstructured text and storing them in a termcontext matrixmethods differ in their definition of a context or by a means to weigh contexts or ultimately in measuring the similarity between two context vectors in this paper we adopt the following methodology for computing term similarityour various web crawls described in section 61 are postagged 
using brills tagger and chunked using a variant of the abney chunker terms are np chunks with some modifiers removed their contexts are defined as their rightmost and leftmost stemmed chunkswe weigh each context f using pointwise mutual information let pmi denote a pointwise mutual information vector constructed for each term as follows pmi where pmiwf is the pointwise mutual information between term w and feature f where cwf is the frequency of feature f occurring for term w n is the number of unique terms and n is the total number of features for all termsterm similarities are computed by comparing these pmi context vectors using measures such as cosine jaccard and dicecomputing the similarity between terms on a large web crawl is a nontrivial problem with a worst case cubic running time o where n is the number of terms and m is the dimensionality of the feature spacesection 21 introduces several optimization techniques below we propose an algorithm for largescale term similarity computation which calculates exact scores for all pairs of terms generalizes to several different metrics and is scalable to a large crawl of the webour optimization strategy follows a generalized sparsematrix multiplication approach which is based on the wellknown observation that a scalar product of two vectors depends only on the coordinates for which both vectors have nonzero valuesfurther we observe that most commonly used similarity scores for feature vectors x and y such as cosine and dice can be decomposed into three values one depending only on features of x another depending only on features of y and the third depending on the features shared both by x and ymore formally commonly used similarity scores f can be expressed as and f3 for some common similarity functionsfor each of these scores f2 f3in our work we compute all of these scores but report our results using only the cosine functionlet a and b be two matrices of pmi feature vectorsour task is to compute the similarity between all vectors in a and all vectors in bin computing the similarity between all pairs of terms a bfigure 1 outlines our algorithm for computing the similarity between all elements of a and befficient computation of the similarity matrix can be achieved by leveraging the fact that is determined solely by the features shared by and an d that most of the feature vectors are very sparse in this case calculating f1 is only required when both feature vectors have a shared nonzero feature significantly reducing the cost of computationdetermining which vectors share a nonzero feature can easily be achieved by first building an inverted index for the featuresthe computational cost of this algorithm is 2 ni where ni is the number of vectors that have a nonzero ith coordinateits worst case time complexity is o where n is the number of terms to be compared c is the maximum number of nonzero coordinates of any vector and v is the number of vectors that have a nonzero ith coordinate where i is the coordinate which is nonzero for the most vectorsin other words the algorithm is efficient only when the density of the coordinates is lowon our datasets we observed near linear running time in the corpus sizebayardo et al described a strategy that potentially reduces the cost even further by omitting the coordinates with the highest number of nonzero valuehowever their algorithm gives a significant advantage only when we are interested in finding solely the similarity between highly similar termsin our experiments we compute the exact similarity 
between all pairs of termsthe pseudocode in figure 1 assumes that a can fit into memory which for large a may be impossiblealso as each element of b is processed independently running parallel processes for nonintersecting subsets of b makes the processing fasterin this section we outline our mapreduce implementation of figure 1 deployed using hadoop1 the opensource software package implementing the mapreduce framework and distributed file systemhadoop has been shown to scale to several thousands of machines allowing users to write simple map and reduce code and to seamlessly manage the sophisticated parallel execution of the codea good primer on mapreduce programming is in our implementation employs the mapreduce model by using the map step to start mn map tasks in parallel each caching 1mth part of a as an inverted index and streaming 1nth part of b through itthe actual inputs are read by the tasks input two matrices a and b of feature vectors build an inverted index for a for k in nonzero features of ai if k not in aa aak emptyset append pairs to the set of nonzero values for feature k directly from hdfs each part of a is processed n times and each part of b is processed m timesm is determined by the amount of memory dedicated for the inverted index and n should be determined by trading off the fact that as n increases more parallelism can be obtained at the increased cost of building the same inverse index n timesthe similarity algorithm from figure 1 is run in each task of the map step of a mapreduce jobthe reduce step is used to group the output by bicreating lists of named entities is a critical problem at commercial engines such as yahoo and googlethe types of entities to be expanded are often not known a priori leaving supervised classifiers undesirableadditionally list creators typically need the ability to expand sets of varying granularitysemisupervised approaches are predominantly adopted since they allow targeted expansions while requiring only small sets of seed entitiesstateoftheart techniques first compute termterm similarities for all available terms and then select candidates for set expansion from amongst the terms most similar to the seeds formally we define our expansion task as task definition given a set of seed entities s s1 s2 sk of a class c s1 s2 sk sn and an unlabeled textual corpus t find all members of the class c for example consider the class of bottled water brandsgiven the set of seeds s volvic san pellegrino gerolsteiner brunnen bling h2o our task is to find all other members of this class such as agua vida apenta culligan dasani ethos water iceland pure spring water imsdal our goal is not to propose a new set expansion algorithm but instead to test the effect of using our webscale term similarity matrix on a stateoftheart distributional set expansion algorithm namely we consider s as a set of prototypical examples of the underlying entity seta representation for the meaning of s is computed by building a feature vector consisting of a weighted average of the features of its seed elements s1 s2 sk a centroidfor example given the seed elements volvic san pellegrino gerolsteiner brunnen bling h2o the resulting centroid consists of brand mineral water monitor lake water take over centroids are represented in the same space as terms allowing us to compute the similarity between centroids and all terms in our corpusa scored and ranked set for expansion is ultimately generated by sorting all terms according to their similarity to the seed set centroid and 
applying a cutoff on either the similarity score or on the total number of retrieved termsin our reported experiments we expanded over 22000 seed sets using our web similarity model from section 3in this section we describe our methodology for evaluating webscale set expansionestimating the quality of a set expansion algorithm requires a random sample from the universe of all entity sets that may ever be expanded where a set represents some concept such as stage actorsan approximation of this universe can be extracted from the list of pages in wikipedia2upon inspection of a random sample of the list of pages we found that several lists were compositions or joins of concepts for example list of world war ii aces from denmark and list of people who claimed to be godwe addressed this issue by constructing a quasirandom sample as followswe randomly sorted the list of every noun occurring in wikipedia2then for each noun we verified whether or not it existed in a wikipedia list and if so we extracted this listif a noun belonged to multiple lists the authors chose the list that seemed most appropriatealthough this does not generate a perfect random sample diversity is ensured by the random selection of nouns and relevancy is ensured by the author adjudicationthe final gold standard consists of 50 sets including classical pianists spanish provinces texas counties male tennis players first ladies cocktails bottled water brands and archbishops of canterburyfor each set we then manually scraped every instance from wikipedia keeping track also of the listed variants namesthe gold standard is available for download at httpwwwpatrickpantelcomcgibinwebtoolsgetfilepltypedataidssegoldwikipedia20071218goldsetstgz the 50 sets consist on average of 208 instances for a total of 10377 instancesin order to analyze the corpus and seed effects on performance we created 30 copies of each of the 50 sets and randomly sorted each copythen for each of the 1500 copies we created a trial for each of the following 23 seed sizes 1 2 5 10 20 30 40 200each trial of seed size s was created by taking the first s entries in each of the 1500 random copiesfor sets that contained fewer than 200 items we only generated trials for seed sizes smaller than the set sizethe resulting trial dataset consists of 20220 trials3set expansion systems consist of an expansion algorithm as well as a corpus for a given system each of the 20220 trials described in the previous section are expandedin our work we limited the total number of system expansions per trial to 1000before judgment of an expanded set we first collapse each instance that is a variant of another into one single instance 4then each expanded instance is judged as correct or incorrect automatically against the gold standard described in section 51our experiments in section 6 consist of precision vs recall or precision vs rank curves where a precision is defined as the percentage of correct instances in the expansion of a seed set and b recall is defined as the percentage of nonseed gold standard instances retrieved by the systemsince the gold standard sets vary significantly in size we also provide the rprecision metric to normalize for set size for the above metrics 95 confidence bounds are computed using the randomly generated samples described in section 52our goal is to study the performance gains on set expansion using our webscale term similarity algorithm from section 3we present a large empirical study quantifying the importance of corpus and seeds on expansion accuracywe 
extracted statistics to build our model from section 3 using four different corpora outlined in table 2the wikipedia corpus consists of a snapshot of the english articles in december 20085the web100 corpus consists of an extraction from a large crawl of the web from yahoo of over 600 million english webpagesfor each crawled document we removed paragraphs containing fewer than 50 tokens and then removed all duplicate sentencesthe resulting corpus consists of over 200 billion wordsthe web020 corpus is a random sample of 15th of the sentences in web100 whereas web004 is a random sample of 125th of web100for each corpus we tagged and chunked each sentence as described in section 3we then computed the similarity between all noun phrase chunks using the model of section 31our proposed optimization for term similarity computation produces exact scores for all pairs of terms on a large web crawlfor our largest corpus web100 we computed the pairwise similarity between over 500 million words in 50 hours using 200 fourcore machinesweb004 is of similar scale to the largest reported randomized technique on this scale we compute the exact similarity matrix in a little over two hours whereas ravichandran et al compute an approximation in 570 hourson average they only find 73 5 to avoid biasing our wikipedia corpus with the test sets wikipedia list of pages were omitted from our statistics as were any page linked to gold standard list members from list of pages of the top1000 similar terms of a random term whereas we find all of themfor set expansion experiments have been run on corpora as large as web004 and wikipedia a corpora 300 times smaller than our web crawlbelow we compare the expansion accuracy of sarmento et al on wikipedia and our web crawlsfigure 2 illustrates the precision and recall tradeoff for our four corpora with 95 confidence intervals computed over all 20220 trials described in section 42table 3 lists the resulting rprecision along with the system precisions at ranks 25 50 and 100 why are the precision scores so lowcompared with previous work that manually select entity types for expansion such as countries and companies our work is the first to evaluate over a large set of randomly selected entity typeson just the countries class our rprecision was 0816 using web100the following sections analyze the effects of various expansion variables corpus size corpus quality seed size and seed qualitynot surprisingly corpus size and quality have a significant impact on expansion performancefigure 2 and table 3 quantify this expectationon our web crawl corpora we observe that the full 200 billion token crawl has an average rprecision 13 higher than 15th of the crawl and 53 higher than 125th of the crawlfigure 2 also illustrates that throughout the full precisionrecall curve web100 significantly outperforms web020 which in turn significantly outperforms web004the higher text quality wikipedia corpus which consists of roughly 60 times fewer tokens than web020 performs nearly as well as web020 we omitted statistics from wikipedia list of pages in order to not bias our evaluation to the test set described in section 51inspection of the precision vs rank graph revealed that from rank 1 thru 550 wikipedia had the same precision as web020from rank 550 to 1000 however wikipedias precision dropped off significantly compared with web020 accounting for the fact that the web corpus contains a higher recall of gold standard instancesthe rprecision reported in table 3 shows that this precision dropoff results 
in a significantly lower rprecision for wikipedia compared with web020intuitively some seeds are better than otherswe study the impact of seed selection effect by inspecting the system performance for several randomly selected seed sets of fixed size and we find that seed set composition greatly affects performancefigure 3 illustrates the precision vs recall tradeoff on our best performing corpus web100 for 30 random seed sets of size 10 for each of our 50 gold standard sets each of the trials performed better than the average system performance distinguishing between the various data series is not important however important to notice is the very large gap between the precisionrecall curves of the best and worst performing random seed setson average the best performing seed sets had 42 higher precision and 39 higher recall than the worst performing seed setsimilar curves were observed for inspected seed sets of size 5 20 30 and 40although outside of the scope of this paper we are currently investigating ways to automatically detect which seed elements are better than others in order to reduce the impact of seed selection effecthere we aim to confirm with a large empirical study the anecdotal claims in that few seeds are necessarywe found that a very small seed sets of size 1 or 2 are not sufficient for representing the intended entity set b 520 seeds yield on average best performance and c surprisingly increasing the seed set size beyond 20 or 30 on average does not find any new correct instanceswe inspected the effect of seed size on rprecision over the four corporaeach seed size curve is computed by averaging the system performance over the 30 random trials of all 50 setsfor each corpus rprecision increased sharply from seed size 1 to 10 and the curve flattened out for seed sizes larger than 20 error analysis on the web100 corpus shows that once our model has seen 1020 seeds the distributional similarity model seems to have enough statistics to discover as many new correct instances as it could ever findsome entities could never be found by the distributional similarity model since they either do not occur or infrequently occur in the corpus or they occur in contexts that vary a great deal from other set elementsfigure 4 illustrates this behavior by plotting for each seed set size the rate of increase in discovery of new correct instances we see that most gold standard instances are discovered with the first 510 seedsafter the 30th seed is introduced no new correct instances are foundan important finding is that the error rate does not increase with increased seed set size this study shows that only few seeds yield best performance and that adding more seeds beyond this does not on average affect performance in a positive or negative waywe proposed a highly scalable term similarity algorithm implemented in the mapreduce framework and deployed over a 200 billion word crawl of the webthe pairwise similarity between 500 million terms was computed in 50 hours using 200 quadcore nodeswe evaluated the impact of the large similarity matrix on a set expansion task and found that the web similarity matrix gave a large performance boost over a stateoftheart expansion algorithm using wikipediafinally we release to the community a testbed for experimentally analyzing automatic set expansion which includes a large collection of nearly random entity sets extracted from wikipedia and over 22000 randomly sampled seed expansion trials
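The core optimization above, an inverted index over feature coordinates so that only term pairs sharing at least one nonzero PMI feature are ever compared, can be sketched on a single machine as follows. The actual system shards the cached inverted index of A across map tasks and streams partitions of B through them under Hadoop, which is not reproduced here; the dictionary-based sparse-vector representation and the variable names are illustrative.

```python
from collections import defaultdict
import math

def all_pairs_cosine(A, B):
    """Exact all-pairs cosine similarity via an inverted index on shared features.

    A, B: {term: {feature: PMI weight}} sparse context vectors.  Because the
    dot product depends only on coordinates that are nonzero in both vectors,
    terms that share no feature are never compared at all.
    """
    index = defaultdict(list)              # feature -> [(term in A, weight)]
    for a, vec in A.items():
        for f, w in vec.items():
            index[f].append((a, w))

    norm_a = {t: math.sqrt(sum(w * w for w in v.values())) for t, v in A.items()}
    norm_b = {t: math.sqrt(sum(w * w for w in v.values())) for t, v in B.items()}

    sims = {}
    for b, vec_b in B.items():
        dots = defaultdict(float)          # partial dot products against A
        for f, w_b in vec_b.items():
            for a, w_a in index.get(f, ()):
                dots[a] += w_a * w_b
        for a, dot in dots.items():
            sims[(a, b)] = dot / (norm_a[a] * norm_b[b])
    return sims
```

Setting A = B gives the self-similarity matrix over all terms; the running time is governed by the density of the feature coordinates rather than by the full n-squared number of term pairs, which is the property the paper exploits at web scale.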
D09-1098
Web-scale distributional similarity and entity set expansion. Computing the pairwise semantic similarity between all words on the web is a computationally challenging task; parallelization and optimizations are necessary. We propose a highly scalable implementation based on distributional similarity, implemented in the MapReduce framework and deployed over a 200 billion word crawl of the web. The pairwise similarity between 500 million terms is computed in 50 hours using 200 quad-core nodes. We apply the learned similarity matrix to the task of automatic set expansion and present a large empirical study to quantify the effect on expansion performance of corpus size, corpus quality, seed composition and seed size. We make public an experimental testbed for set expansion analysis that includes a large collection of diverse entity sets extracted from Wikipedia. Our dash stores the case for each phrase in Wikipedia. We find that 10 to 20 seeds are a sufficient starting set in a distributional similarity model to discover as many new correct instances as may ever be found. Given the seed set S, a seed centroid vector is produced using the surrounding word contexts of all occurrences of all the seeds in the corpus.
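The seed-centroid expansion step described in this summary can be sketched as follows, assuming the PMI context vectors from the similarity model are available as sparse dictionaries. The 1000-candidate cutoff mirrors the experimental setup described above, but the function and variable names are only an illustration, not the paper's implementation.

```python
import math
from collections import defaultdict

def expand_set(seeds, vectors, top_n=1000):
    """Centroid-based set expansion over distributional context vectors.

    seeds: list of seed terms; vectors: {term: {feature: PMI weight}}.
    A centroid is built by averaging the seeds' vectors; every other term is
    scored by cosine similarity to the centroid and returned best-first.
    """
    centroid = defaultdict(float)
    for s in seeds:
        for f, w in vectors[s].items():
            centroid[f] += w / len(seeds)

    def cosine(u, v):
        dot = sum(u[f] * v[f] for f in u.keys() & v.keys())
        nu = math.sqrt(sum(w * w for w in u.values()))
        nv = math.sqrt(sum(w * w for w in v.values()))
        return dot / (nu * nv) if nu and nv else 0.0

    seed_set = set(seeds)
    scored = sorted(((cosine(centroid, vec), term)
                     for term, vec in vectors.items() if term not in seed_set),
                    reverse=True)
    return [term for _, term in scored[:top_n]]
```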
supervised models for coreference resolution traditional learningbased coreference reoperate by training a mentionfor determining whether two mentions are coreferent or not two independent lines of recent research have attempted to improve these mentionpair one by learning a mentionto rank preceding mentions for a given anaphor and the other training an to determine whether a preceding cluster is coreferent with a given mention we propose a clusterranking approach to coreference resolution that combines the strengths of mention rankers and entitymention models we additionally show how our clusterranking framework naturally allows discoursenew entity detection to be learned jointly with coreference resolution experimental results on the ace data sets demonstrate its superior performance to competing approaches noun phrase coreference resolution is the task of identifying which nps refer to the same realworld entity or concepttraditional learningbased coreference resolvers operate by training a model for classifying whether two mentions are coreferring or not ng and cardie kehler et al ponzetto and strube despite their initial successes these mentionpair models have at least two major weaknessesfirst since each candidate antecedent for a mention to be resolved is considered independently of the others these models only determine how good a candidate antecedent is relative to the active mention but not how good a candidate antecedent is relative to other candidatesin other words they fail to answer the critical question of which candidate antecedent is most probablesecond they have limitations in their expressiveness the information extracted from the two mentions alone may not be sufficient for making an informed coreference decision especially if the candidate antecedent is a pronoun or a mention that lacks descriptive information such as gender to address the first weakness researchers have attempted to train a mentionranking model for determining which candidate antecedent is most probable given an active mention ranking is arguably a more natural reformulation of coreference resolution than classification as a ranker allows all candidate antecedents to be considered simultaneously and therefore directly captures the competition among themanother desirable consequence is that there exists a natural resolution strategy for a ranking approach a mention is resolved to the candidate antecedent that has the highest rankthis contrasts with classificationbased approaches where many clustering algorithms have been employed to coordinate the pairwise coreference decisions to address the second weakness researchers have investigated the acquisition of entitymention coreference models yang et alunlike mentionpair models these entitymention models are trained to determine whether an active mention belongs to a preceding possibly partiallyformed coreference clusterhence they can employ clusterlevel features which makes them more expressive than mentionpair modelsmotivated in part by these recently developed models we propose in this paper a clusterranking approach to coreference resolution that combines the strengths of mentionranking models and entitymention modelsspecifically we recast coreference as the problem of determining which of a set of preceding coreference clusters is the best to link to an active mention using a learned cluster rankerin addition we show how discoursenew detection can be learned jointly with coreference resolution in our clusterranking frameworkit is worth noting that 
researchers typically adopt a pipeline coreference architecture performing discoursenew detection prior to coreference resolution and using the resulting information to prevent a coreference system from resolving mentions that are determined to be discoursenew for an overviewas a result errors in discoursenew detection could be propagated to the resolver possibly leading to a deterioration of coreference performance jointly learning discoursenew detection and coreference resolution can potentially address this errorpropagation problemin sum we believe our work makes three main contributions to coreference resolution proposing a simple yet effective coreference modelour work advances the stateoftheart in coreference resolution by bringing learningbased coreference systems to the next level of performancewhen evaluated on the ace 2005 coreference data sets cluster rankers outperform three competing models mentionpair entitymention and mentionranking models by a large marginalso our jointlearning approach to discoursenew detection and coreference resolution consistently yields cluster rankers that outperform those adopting the pipeline architectureequally importantly cluster rankers are conceptually simple and easy to implement and do not rely on sophisticated training and inference procedures to make coreference decisions in dependent relation to each other unlike relational coreference models bridging the gap between machinelearning approaches and linguisticallymotivated approaches to coreference resolutionwhile machine learning approaches to coreference resolution have received a lot of attention since the mid90s popular learningbased coreference frameworks such as the mentionpair model are arguably rather unsatisfactory from a linguistic point of viewin particular they have not leveraged advances in discoursebased anaphora resolution research in the 70s and 80sour work bridges this gap by realizing in a new machine learning framework ideas rooted in lappin and leasss heuristicbased pronoun resolver which in turn was motivated by classic saliencebased approaches to anaphora resolutionrevealing the importance of adopting the right modelwhile entitymention models have previously been shown to be worse or at best marginally better than their mentionpair counterparts our clusterranking models which are a natural extension of entitymention models significantly outperformed all competing approachesthis suggests that the use of an appropriate learning framework can bring us a long way towards highperformance coreference resolutionthe rest of the paper is structured as followssection 2 discusses related worksection 3 describes our baseline coreference models mentionpair entitymention and mentionrankingwe discuss our clusterranking approach in section 4 evaluate it in section 5 and conclude in section 6heuristicbased cluster rankingas mentioned previously the work most related to ours is lappin and leass whose goal is to perform pronoun resolution by assigning an anaphoric pronoun to the highestscored preceding clusternevertheless lappin and leasss work differs from ours in several respectsfirst they only tackle pronoun resolution rather than the full coreference tasksecond their algorithm is heuristicbased in particular the score assigned to a preceding cluster is computed by summing over the weights associated with the factors applicable to the cluster where the weights are determined heuristically rather than learned unlike ourslike many heuristicbased pronoun resolvers they first apply a set of 
constraints to filter grammatically incompatible candidate antecedents and then rank the remaining ones using salience factorsas a result their clusterranking model employs only factors that capture the salience of a cluster and can therefore be viewed as a simple model of attentional state realized by coreference clustersby contrast our resolution strategy is learned without applying handcoded constraints in a separate filtering stepin particular we attempt to determine the compatibility between a cluster and an active mention using factors that determine not only salience but also lexical and grammatical compatibility for instanceentitymention coreference modelsluo et al represent one of the earliest attempts to investigate learningbased entitymention modelsthey use the any predicate to generate clusterlevel features as follows given a binaryvalued feature x defined over a pair of mentions they introduce an anyx clusterlevel feature which has the value true if x is true between the active mention and any mention in the preceding cluster under considerationcontrary to common wisdom this entitymention model underperforms its mentionpair counterpart in spite of the generalization from mentionpair to clusterlevel featuresin yang et als entitymention model a training instance is composed of an active mention mk a preceding cluster c and a mention mj in c that is closest in distance to mk in the associated textthe feature set used to represent the instance is primarily composed of features that describe the relationship between mj and mk as well as a few clusterlevel featuresin other words the model still relies heavily on features used in a mentionpair modelin particular the inclusion of mj in the feature vector representation to some extent reflects the authors lack of confidence that a strong entitymention model can be trained without mentionpairbased featuresour ranking model on the other hand is trained without such featuresmore recently yang et al have proposed another entitymention model trained by inductive logic programminglike their previous work the scarcity of clusterlevel predicates underexploits the expressiveness of entitymention modelsmention rankingthe notion of ranking candidate antecedents can be traced back to centering algorithms many of which use grammatical roles to rank forwardlooking centers walker et al and mitkov however mention ranking has been employed in learningbased coreference resolvers only recentlyas mentioned before denis and baldridge train a mentionranking modeltheir work can be viewed as an extension of yang et als twincandidate coreference model which ranks only two candidate antecedents at a timeunlike ours however their model ranks mentions rather than clusters and relies on an independentlytrained discoursenew detectordiscoursenew detectiondiscoursenew detection is often tackled independently of coreference resolutionpleonastic its have been detected using heuristics and learningbased techniques such as rule learning kernels and distributional methods nonanaphoric definite descriptions have been detected using heuristics and unsupervised methods general discoursenew detectors that are applicable to different types of nps have been built using heuristics and modeled generatively and discriminatively there have also been attempts to perform joint inference for discoursenew detection and coreference resolution using integer linear programming where a discoursenew classifier and a coreference classifier are trained independently of each other and then ilp 
is applied as a postprocessing step to jointly infer discoursenew and coreference decisions so that they are consistent with each other joint inference is different from our jointlearning approach which allows the two tasks to be learned jointly and not independentlyin this section we describe three coreference models that will serve as our baselines the mentionpair model the entitymention model and the mentionranking modelfor illustrative purposes we will use the text segment shown in figure 1each mention m in the segment is annotated as mcidmid where mid is the mention id and cid is the id of the cluster to which m belongsas we can see the mentions are partitioned into four sets with barack obama his and he in one cluster and each of the remaining mentions in its own clusteras noted before a mentionpair model is a classifier that decides whether or not an active mention mk is coreferent with a candidate antecedent mjeach instance i represents mj and mk and consists of the 39 features shown in table 1these features have largely been employed by stateoftheart learningbased coreference systems ng and cardie bengtson and roth and are computed automaticallyas can be seen the features are divided into four blocksthe first two blocks consist of features that describe the properties of mj and mk respectively and the last two blocks of features describe the relationship between mj and mkthe classification associated with a training instance is either positive or negative depending on whether mj and mk are coreferentif one training instance were created from each pair of mentions the negative instances would significantly outnumber the positives yielding a skewed class distribution that will typically have an adverse effect on model trainingas a result only a subset of mention pairs will be generated for trainingfollowing soon et al we create a positive instance for each discourseold mention mk and its closest antecedent mj and a negative instance for mk paired with each of the intervening mentions mj1 mj2 mk1in our running example shown in figure 1 three training instances will be generated for he i i and ithe first two of these instances will be labeled as negative and the last one will be labeled as positiveto train a mentionpair classifier we use the svm learning algorithm from the svmlight package converting all multivalued features into an equivalent set of binaryvalued featuresafter training the resulting svm classifier is used to identify an antecedent for a mention in a test textspecifically an active mention mk selects as its antecedent the closest preceding mention that is classified as coreferent with mkif mk is not classified as coreferent with any preceding mention it will be considered discoursenew unlike a mentionpair model an entitymention model is a classifier that decides whether or not an active mention mk is coreferent with a partial cluster cj that precedes mkeach training instance i represents cj and mkthe features for an instance can be divided into two types features that describe mk and clusterlevel features which describe the relationship between cj and mkmotivated by previous work we create clusterlevel features from mentionpair features using four predicates none mostfalse mosttrue and allspecifically for each feature x shown in the last two blocks in table 1 we first convert x into an equivalent set of binaryvalued features if it is multivaluedthen for each resulting binaryvalued feature xb we create four binaryvalued clusterlevel features nonexb is true when xb is 
false between mk and each mention in cj mostfalsexb is true when xb is true between mk and less than half of the mentions in cj mosttruexb is true when xb is true between mk and at least half of the mentions in cj and allxb is true when xb is true between mk and each mention in cjhence for each xb exactly one of these four clusterlevel features evaluates to truefollowing yang et al we create a positive instance for each discourseold mention mk and the preceding cluster cj to which it belongs and a negative instance for mk paired with each partial cluster whose last mention appears between mk and its closest antecedent consider again our running examplethree training instances will be generated for he i i and ithe first two of these instances will be labeled as negative and the last one will be labeled as positiveas in the mentionpair model we train an entitymention classifier using the svm learnerafter training the resulting classifier is used to identify a preceding cluster for a mention in a test textspecifically the mentions are processed in a lefttoright mannerfor each active mention mk a test instance is created between mk and each of the preceding clusters formed so farall the test instances are then presented to the classifierfinally mk will be linked to the closest preceding cluster that is classified as coreferent with mkif mk is not classified as coreferent with any preceding cluster it will be considered discoursenewnote that all partial clusters preceding mk are formed incrementally based on the predictions of the classifier for the first k 1 mentionsas noted before a ranking model imposes a ranking on all the candidate antecedents of an active mention mkto train a ranker we use the svm rankerlearning algorithm from the svmlzght packagelike the mentionpair model each training instance i represents mk and a preceding mention mjin fact the features that represent the instance as well as the method for creating training instances are identical to those employed by the mentionpair modelthe only difference lies in the assignment of class values to training instancesassuming that sk is the set of training instances created for anaphoric mention mk the class value for an instance i in sk is the rank of mj among competing candidate antecedents which is 2 if mj is the closest antecedent of mk and 1 otherwise1 to exemplify consider our running exampleas in the mentionpair model three training instances will be generated for he i i ithe third instance will have a class value of 2 and the remaining two will have a class value of 1after training the mentionranking model is applied to rank the candidate antecedents for an active mention in a test text as followsgiven an active mention mk we follow denis and baldridge and use an independentlytrained classifier to determine whether mk is discoursenewif so mk will not be resolvedotherwise we create test instances for mk by pairing it with each of its preceding mentionsthe test instances are then presented to the ranker and the preceding mention that is assigned the largest value by the ranker is selected as the antecedent of mkthe discoursenew classifier used in the resolution step is trained with 26 of the 37 features2 described in ng and cardie that are deemed useful for distinguishing between anaphoric and nonanaphoric mentionsthese features can be broadly divided into two types features that encode the form of the mention and features that compare the mention to one of its preceding mentionsin this section we describe our clusterranking 
approach to np coreferenceas noted before our approach aims to combine the strengths of entitymention models and mentionranking modelsfor ease of exposition we will describe in this subsection how to train and apply a cluster ranker when it is used in a pipeline architecture where discoursenew detection is performed prior to coreference resolutionin the next subsection we will show how the two tasks can be learned jointlyrecall that a cluster ranker ranks a set of preceding clusters for an active mention mksince a cluster ranker is a hybrid of a mentionranking model and an entitymention model the way it is trained and applied is also a hybrid of the twoin particular the instance representation employed by a cluster ranker is identical to that used by an entitymention model where each training instance i represents a preceding cluster cj and a discourseold mention mk and consists of clusterlevel features formed from predicatesunlike in an entitymention model however in a cluster ranker a training instance is created between each discourseold mention mk and each of its preceding clusters and since we are training a model for ranking clusters the assignment of class values to training instances is similar to that of a mention rankerspecifically the class value of a training instance i created for mk is the rank of cj among the competing clusters which is 2 if mk belongs to cj and 1 otherwiseapplying the learned cluster ranker to a test text is similar to applying a mention rankerspecifically the mentions are processed in a lefttoright mannerfor each active mention mk we first apply an independentlytrained classifier to determine if mk is discoursenewif so mk will not be resolvedotherwise we create test instances for mk by pairing it with each of its preceding clustersthe test instances are then presented to the ranker and mk is linked to the cluster that is assigned the highest value by the rankernote that these partial clusters preceding mk are formed incrementally based on the predictions of the ranker for the first k1 mentions no goldstandard coreference information is used in their formationthe cluster ranker described above can be used to determine which preceding cluster a discourseold mention should be linked to but it cannot be used to determine whether a mention is discoursenew or notthe reason is simple all the training instances are generated from discourseold mentionshence to jointly learn discoursenew detection and coreference resolution we must train the ranker using instances generated from both discourseold and discoursenew mentionsspecifically when training the ranker we provide each active mention with the option to start a new cluster by creating an additional instance that contains features that solely describe the active mention and has the highest rank value among competing clusters if it is discoursenew and the lowest rank value otherwisethe main advantage of jointly learning the two tasks is that it allows the ranking model to evaluate all possible options for an active mention simultaneouslyafter training the resulting cluster ranker processes the mentions in a test text in a lefttoright mannerfor each active mention mk we create test instances for it by pairing it with each of its preceding clustersto allow for the possibility that mk is discoursenew we create an additional test instance that contains features that solely describe the active mention all these test instances are then presented to the rankerif the additional test instance is assigned the highest rank 
value by the ranker then mk is classified as discoursenew and will not be resolvedotherwise mk is linked to the cluster that has the highest rankas before all partial clusters preceding mk are formed incrementally based on the predictions of the ranker for the first k 1 mentionscorpuswe use the ace 2005 coreference corpus as released by the ldc which consists of the 599 training documents used in the official ace evaluation3 to ensure diversity the corpus was created by selecting documents from six different sources broadcast news broadcast conversations newswire webblog usenet and conversational telephone speech the number of documents belonging to each source is shown in table 2for evaluation we partition the 599 documents into a training set and a test set following a 8020 ratio ensuring that the two sets have the same proportion of documents from the six sourcesmention extractorwe evaluate each coreference model using both true mentions and system mentions to extract system mentions from a test text we trained a mention extractor on the training textsfollowing florian et al we recast mention extraction as a sequence labeling task where we assign to each token in a test text a label that indicates whether it begins a mention is inside a mention or is outside a mentionhence to learn the extractor we create one training instance for each token in a training text and derive its class value from the annotated dataeach instance represents wi the token under consideration and consists of 29 linguistic features many of which are modeled after the systems of bikel et al and florian et al as described belowlexical tokens in a window of 7 wi3 wi3capitalization determine whether wi isallcap isinitcap iscapperiod and isalllower morphological wis prefixes and suffixes of length one two three and fourgrammatical the partofspeech tag of wi obtained using the stanford loglinear pos tagger semantic the named entity tag of wi obtained using the stanford crfbased ne recognizer gazetteers eight dictionaries containing pronouns common words and words that are not names person names person titles and honorifics vehicle words location names company names and nouns extracted from wordnet that are hyponyms of person we employ crf5 a c implementation of conditional random fields for training the mention detector which achieves an fscore of 867 on the test setthese extracted mentions are to be used as system mentions in our coreference experimentsscoring programsto score the output of a coreference model we employ three scoring programs muc b3 and φ3ceaf there is a complication howeverwhen scoring a response partition against a key partition a scoring program needs to construct a mapping between the mentions in the response and those in the keyif the response is generated using true mentions then every mention in the response is mapped to some mention in the key and vice versa in other words there are no twinless mentions however this is not the case when system mentions are usedthe aforementioned complication does not arise from the construction of the mapping but from the fact that bagga and baldwin and luo do not specify how to apply b3 and ceaf to score partitions generated from system mentionswe propose a simple solution to this problem we remove all and only those twinless system mentions that are singletons before applying b3 and ceafthe reason is simple since the coreference resolver has successfully identified these mentions as singletons it should not be penalized and removing them allows us to avoid 
such penaltynote that we only remove twinless system mentions that are singletons this allows us to reward a resolver for successful identification of singleton mentions that have twins thus overcoming a major weakness of and common criticism against the muc scoreralso we retain twinless system mentions that are nonsingletons as the resolver should be penalized for identifying spurious coreference relationson the other hand we do not remove twinless mentions in the key partition as we want to ensure that the resolver makes the correct coreference decisions for themwe believe that our proposal addresses stoyanov et als problem of having very low precision when applying the ceaf scorer to score partitions of system mentionsthe mentionpair baselinewe train our first baseline the mentionpair coreference classifier using the svm learning algorithm as implemented in the svmlight package 6 results of this baseline using true mentions and system mentions shown in row 1 of tables 3 and 4 are reported in terms of recall precision and fscore provided by the three scoring programsas we can see this baseline achieves fscores of 543700 and 534625 for true mentions and system mentions respectivelythe entitymention baselinenext we train our second baseline the entitymention coreference classifier using the svm learnerresults of this baseline are shown in row 2 of tables 3 and 4for true mentions this baseline achieves an fscore of 548707in comparison to the mentionpair baseline fscore rises insignificantly according to all three scorers7 similar trends can be observed for system mentions where the fscores between the two models are statistically indistinguishable across the boardwhile the insignificant performance difference is somewhat surprising given the improved expressiveness of entitymention models over mentionpair models similar trends have been reported by luo et al the mentionranking baselineour third baseline is the mentionranking coreference model trained using the rankerlearning algorithm in svmlightto identify discoursenew mentions we employ two methodsin the first method we adopt a pipeline architecture where we train an svm classifier for discoursenew detection independently of the mention ranker on the training set using the 26 features described in section 33we then apply the resulting classifier to each test text to filter discoursenew mentions prior to coreference resolutionresults of the mention ranker are shown in row 3 of tables 3 and 4as we can see the ranker achieves fscores of 578712 and 541654 for true mentions and system mentions respectively yielding a significant improvement over the entitymention baseline in all but one case in the second method we perform discoursenew detection jointly with coreference resolution using the method described in section 42while we discussed this joint learning method in the context of cluster ranking it should be easy to see that the method is equally applicable to a mention rankerresults of the mention ranker using this joint architecture are shown in row 4 of tables 3 and 4as we can see the ranker achieves fscores of 616734 and 556671 for true mentions and system mentions respectivelyfor both types of mentions the improvements over the corresponding results for the entitymention baseline are significant and suggest that mention ranking is a precisionenhancing devicemoreover in comparison to the pipeline architecture in row 3 we see that fscore rises significantly by 2238 for true mentions and improves by a smaller margin of 0317 for system 
mentionsthese results demonstrate the benefits of joint modelingour clusterranking modelfinally we evaluate our clusterranking modelas in the mentionranking baseline we employ both the pipeline architecture and the joint architecture for discoursenew detectionresults are shown in rows 5 and 6 of tables 3 and 4 respectively for the two architectureswhen true mentions are used the pipeline architecture yields an fscore of 618 748 which represents a significant improvement over the mention ranker adopting the pipeline architecturewith the joint architecture the cluster ranker achieves an fscore of 633760this also represents a significant improvement over the mention ranker adopting the joint architecture the best of the baselines and suggests that cluster ranking is abetter precisionenhancing model than mention rankingmoreover comparing the results in these two rows reveals the superiority of the joint architecture over the pipeline architecture particularly in terms of its ability to enhance system precisionsimilar performance trends can be observed when system mentions are usedwe have presented a clusterranking approach that recasts the mention resolution process as the problem of finding the best preceding cluster to link an active mention tocrucially our approach combines the strengths of entitymention models and mentionranking modelsexperimental results on the ace 2005 corpus show that jointly learning coreference resolution and discoursenew detection allows the cluster ranker to achieve better performance than adopting a pipeline coreference architecture and our cluster ranker significantly outperforms the mention ranker the best of the three baseline coreference models under both the pipeline architecture and the joint architectureoverall we believe that our clusterranking approach advances the stateoftheart in coreference resolution both theoretically and empiricallywe thank the three anonymous reviewers for their invaluable comments on the paperthis work was supported in part by nsf grant iis0812261
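To make the resolution procedure above concrete, the following is a minimal sketch of the joint cluster-ranking loop: mentions are processed left to right, each active mention is paired with every preceding partial cluster plus a null-cluster instance, and the highest-ranked option decides between starting a new cluster and linking. The function names (`score`, `cluster_features`, `mention_features`) are hypothetical placeholders for the trained ranker and its feature extractors, not the authors' code.

```python
# Minimal sketch of the joint cluster-ranking resolution loop described above.
# `score` stands in for the trained ranker and `cluster_features` /
# `mention_features` for its feature extractors; all three are assumed
# placeholders rather than the paper's implementation.

def resolve(mentions, score, cluster_features, mention_features):
    clusters = []                                  # partial clusters built so far
    for m in mentions:                             # left-to-right processing
        # one test instance per preceding cluster ...
        candidates = [(c, score(cluster_features(c, m))) for c in clusters]
        # ... plus a null-cluster instance describing only the active mention,
        # which lets the ranker decide that m is discourse-new
        candidates.append((None, score(mention_features(m))))

        best, _ = max(candidates, key=lambda pair: pair[1])
        if best is None:
            clusters.append([m])                   # discourse-new: start a new cluster
        else:
            best.append(m)                         # discourse-old: link to the best cluster
    return clusters
```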
D09-1101
supervised models for coreference resolution. traditional learningbased coreference resolvers operate by training a mentionpair classifier for determining whether two mentions are coreferent or not. two independent lines of recent research have attempted to improve these mentionpair classifiers one by learning a mentionranking model to rank preceding mentions for a given anaphor and the other by training an entitymention classifier to determine whether a preceding cluster is coreferent with a given mention. we propose a clusterranking approach to coreference resolution that combines the strengths of mention rankers and entitymention models. we additionally show how our clusterranking framework naturally allows discoursenew entity detection to be learned jointly with coreference resolution. experimental results on the ace data sets demonstrate its superior performance to competing approaches. in each query we include a nullcluster instance to allow joint learning of discoursenew detection. we show that the cr model is stronger than the mp model. our cluster ranking model proceeds in a lefttoright fashion and adds the current discourse old mention to the highest scoring preceding cluster
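The scoring adjustment described in the evaluation setup above, removing twinless system mentions that the resolver left as singletons before applying B3 and CEAF, reduces to a short filtering step. The sketch below assumes the response partition is a list of sets of system mentions and that key mentions can be compared to system mentions directly; both are representational assumptions made only for illustration.

```python
# Hedged sketch of the proposal above: drop twinless system mentions that are
# singletons before scoring with B3/CEAF.  `response` is assumed to be a list
# of sets (the predicted partition) and `key_mentions` a set containing every
# mention of the key partition; the representation is illustrative only.

def prune_twinless_singletons(response, key_mentions):
    kept = []
    for cluster in response:
        if len(cluster) == 1 and not (cluster & key_mentions):
            # a spurious mention the resolver correctly left alone:
            # neither reward nor penalize it
            continue
        kept.append(cluster)     # keep twinned mentions and all non-singletons
    return kept
```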
simple coreference resolution with rich syntactic and semantic features coreference systems are driven by syntactic semantic and discourse constraints we present a simple approach which completely modularizes these three aspects in contrast to much current work which focuses on learning and on the discourse component our system is deterministic and is driven entirely by syntactic and semantic compatibility as learned from a large unlabeled corpus despite its simplicity and discourse naivete our system substantially outperforms all unsupervised systems and most supervised ones primary contributions include the presentation of a simpletoreproduce highperforming baseline and the demonstration that most remaining errors can be attributed to syntactic and semantic factors external to the coreference phenomenon the resolution of entity reference is influenced by a variety of constraintssyntactic constraints like the binding theory the iwithini filter and appositive constructions restrict reference by configurationsemantic constraints like selectional compatibility and subsumption rule out many possible referentsfinally discourse phenomena such as salience and centering theory are assumed to heavily influence reference preferencesas these varied factors have given rise to a multitude of weak features recent work has focused on how best to learn to combine them using models over reference structures in this work we break from the standard viewinstead we consider a vastly more modular system in which coreference is predicted from a deterministic function of a few rich featuresin particular we assume a threestep processfirst a selfcontained syntactic module carefully represents syntactic structures using an augmented parser and extracts syntactic paths from mentions to potential antecedentssome of these paths can be ruled in or out by deterministic but conservative syntactic constraintsimportantly the bulk of the work in the syntactic module is in making sure the parses are correctly constructed and used and this modules most important training data is a treebanksecond a selfcontained semantic module evaluates the semantic compatibility of headwords and individual namesthese decisions are made from compatibility lists extracted from unlabeled data sources such as newswire and web datafinally of the antecedents which remain after rich syntactic and semantic filtering reference is chosen to minimize tree distancethis procedure is trivial where most systems are rich and so does not need any supervised coreference datahowever it is rich in important ways which we argue are marginalized in recent coreference workinterestingly error analysis from our final system shows that its failures are far more often due to syntactic failures and semantic failures than failure to model discourse phenomena or appropriately weigh conflicting evidenceone contribution of this paper is the exploration of strong modularity including the result that our system beats all unsupervised systems and approaches the state of the art in supervised onesanother contribution is the error analysis result that even with substantial syntactic and semantic richness the path to greatest improvement appears to be to further improve the syntactic and semantic modulesfinally we offer our approach as a very strong yet easy to implement baselinewe make no claim that learning to reconcile disparate features in a joint model offers no benefit only that it must not be pursued to the exclusion of rich nonreference analysisin coreference resolution 
we are given a document which consists of a set of mentions each mention is a phrase in the document and we are asked to cluster mentions according to the underlying referent entitythere are three basic mention types proper nominal and pronominal 1 for comparison to previous work we evaluate in the setting where mention boundaries are given at test time however our system can easily annotate reference on all noun phrase nodes in a parse tree in this work we use the following data sets development we will present evaluations on multiple coreference resolution metrics as no single one is clearly superiorin this section we develop our system and report developmental results on ace2004rothdev we report pairwise f1 figures here but report on many more evaluation metrics in section 4at a high level our system resembles a pairwise coreference model for each mention mi we select either a singlebest antecedent amongst the previous mentions m1 mi1 or the null mention to indicate the underlying entity has not yet been evokedmentions are linearly ordered according to the position of the mention head with ties being broken by the larger node coming firstwhile much research has explored how to reconcile pairwise decisions to form coherent clusters we simply take the transitive closure of our pairwise decision and bengston and roth which can and does because system errorsin contrast to most recent research our pairwise decisions are not made with a learned model which outputs a probability or confidence but instead for each mention mi we select an antecedent amongst m1 mi_1 or the null mention as follows initially there is no syntactic constraint the antecedent compatibility filter allows proper and nominal mentions to corefer only with mentions that have the same head and pronouns have no compatibility constraints mention heads are determined by parsing the given mention span with the stanford parser and using the collins head rules poon and domingos showed that using syntactic heads strongly outperformed a simple rightmost headword rulethe mention type is determined by the head pos tag proper if the head tag is nnp or nnps pronoun if the head tag is prp prp wp or wp and nominal otherwisefor the selection phase we order mentions m1 mi_1 according to the position of the head word and select the closest mention that remains after constraint and filtering are appliedthis choice reflects the intuition of grosz et al that speakers only use pronominal mentions when there are not intervening compatible mentionsthis system yields a rather low 489 pairwise f1 there are many primarily recall errors made choosing antecedents for all mention types which we will address by adding syntactic and semantic constraintsin this section we enrich the syntactic representation and information in our system to improve resultswe first focus on fixing the pronoun antecedent choicesa common error arose from the use of mention head distance as a poor proxy for discourse saliencefor instance consider the example in figure 1 the mention america is closest to its in flat mention distance but syntactically nintendo ofamerica holds a more prominent syntactic position relative to the pronoun which as hobbs argues is key to discourse saliencemapping mentions to parse nodes in order to use the syntactic position of mentions to determine anaphoricity we must associate each mention in the document with a parse tree nodewe parse all document sentences with the stanford parser and then for each evaluation mention we find the largestspan np 
which has the previously determined mention head as its head5 often this results in a different typically larger mention span than annotated in the datanow that each mention is situated in a parse tree we utilize the length of the shortest tree path between mentions as our notion of distancein by agreement constraints the pronoun them is closest to the site mention but has an incompatible number feature with itthe closest compatible mention is the israelis which is correct particular this fixes examples such as those in figure 1 where the true antecedent has many embedded mentions between itself and the pronounthis change by itself yields 517 pairwise f1 which is small overall but reduces pairwise pronoun antecedent selection error from 513 to 425we now refine our compatibility filtering to incorporate simple agreement constraints between coreferent mentionssince we currently allow proper and nominal mentions to corefer only with matching head mentions agreement is only a concern for pronounstraditional linguistic theory stipulates that coreferent mentions must agree in number person gender and entity type here we implement person number and entity type agreement6 a number feature is assigned to each mention deterministically based on the head and its pos tagfor entity type we use ner labelsideally we would like to have information about the entity type of each referential np however this information is not easily obtainableinstead we opt to utilize the stanford ner tagger over the sentences in a document and annotate each np with the ner label assigned to that mention headfor each mention when its np is assigned an ner label we allow it to only be compatible with that ner label7 for pronouns we deterministically assign a set of compatible ner values since the ner tagger typically does not label nonproper np heads we have no ner compatibility information for nominalswe incorporate agreement constraints by filtering the set of possible antecedents to those which have compatible number and ner types with the target mentionthis yields 534 pairwise f1 and reduces pronoun antecedent errors to 425 from 344an example of the type of error fixed by these agreement constraints is given by figure 2our system has so far focused only on improving pronoun anaphora resolutionhowever a plurality of the errors made by our system are amongst nonpronominal mentions8 we take the approach that in order to align a nonpronominal mention to an antecedent without an identical head we require evidence that the mentions are compatiblejudging compatibility of mentions generally requires semantic knowledge to which we return laterhowever some syntactic configurations guarantee coreferencethe one exploited most in coreference work is the appositive constructionhere we represent apposition as a syntactic feature of an np indicating that it is coreferent with its parent np we deterministically mark a node as npappos when it is the third child in of a parent np whose expansion begins with and there is not a conjunction in the expansion role appositives during development we discovered many errors which involved a variant of appositives which we call role appositives where an np modifying the head np describes the role of that entity there are several challenges to correctly labeling these role nps as being appositivesfirst the nps produced by treebank parsers are flat and do not have the required internal structure while fully solving this problem is difficult we can heuristically fix many instances of the problem by 
placing an np around maximum length sequences of nnp tags or nn tags within an np note that this will fail for many constructions such as yous president barack obama which is analyzed as a flat sequence of proper nounsonce this internal np structure has been added whether the np immediately to the left of the head np is an appositive depends on the entity typefor instance rabbi ashi is an apposition but iranian army is notagain a full solution would require its own model here we mark as appositions any nps immediately to the left of a head child np where the head child np is identified as a person by the ner tagger9 we incorporate np appositive annotation as a constraint during filteringany mention which corresponds to an appositive node has its set of possible antecedents limited to its parentalong with the appositive constraint we implement the iwithini constraint that any nonappositive np cannot be be coreferent with its parent this constraint is then propagated to any node its parent is forced to agree withthe order in which these constraints are applied is important as illustrated by the example in figure 4 first the list of possible antecedents for the appositive np is constrained to only its parentnow that all appositives have been constrained we apply the iwithini constraint which prevents its from having the np headed by brand in the set of possible antecedents and by propagation also removes the np headed by gitanothis leaves the np walmart as the closest compatible mentionadding these syntactic constraints to our system yields 554 f1 a fairly substantial improvement but many recall errors remain between mentions with differing headsresolving such cases will require external semantic information which we will automatically acquire predicate nominatives another syntactic constraint exploited in poon and domingos is the predicate nominative construction where the object of a copular verb is constrained to corefer with its subject while much less frequent than appositive configurations to be role appositive but we do not do so here opment set predicate nominatives are another highly reliable coreference pattern which we will leverage in section 32 to mine semantic knowledgeas with appositives we annotate object predicatenominative nps and constrain coreference as beforethis yields a minor improvement to 555 f1while appositives and related syntactic constructions can resolve some cases of nonpronominal reference most cases require semantic knowledge about the various entities as well as the verbs used in conjunction with those entities to disambiguate references however given a semantically compatible mention head pair say aol and company one might expect to observe a reliable appositive or predicativenominative construction involving these mentions somewhere in a large corpusin fact the wikipedia page for aol10 has a predicatenominative construction which supports the compatibility of this head pair aol llc is an american global internet services and media company operated by time warnerin order to harvest compatible head pairs we utilize our blipp and wiki data sets and for each noun and pronoun we assign a maximal np mention node for each nominal head as in section 311 we then annotate appositive and predicatenominative nps as in section 313for any np which is annotated as an appositive or predicatenominative we extract the head pair of that node and its constrained antecedentthe resulting set of compatible head words while large covers a little more than half of the examples 
given in table 1the problem is that these highlyreliable syntactic configurations are too sparse and cannot capture all the entity information presentfor instance the first sentence of wikipedia abstract for al gore is albert arnold al gore jr is an american environmental activist who served as the 45th vice president of the united states from 1993 to 2001 under president bill clintonthe required lexical pattern x who served as y is a general appositivelike pattern that almost surely indicates coreferencerather than opt to manually create a set of these coreference patterns as in hearst we instead opt to automatically extract these patterns from large corpora as in snow et al and phillips and riloff we take a simple bootstrapping technique given a set of mention pairs extracted from appositives and predicatenominative configurations we extract counts over tree fragments between nodes which have occurred in this set of head pairs the tree fragments are formed by annotating the internal nodes in the tree path with the head word and pos along with the subcategorizationwe limit the paths extracted in this way in several ways paths are only allowed to go between adjacent sentences and have a length of at most 10we then filter the set of paths to those which occur more than a hundred times and with at least 10 distinct seed head word pairsthe vast majority of the extracted fragments are variants of traditional appositives and predicatenominatives with some of the structure of the nps specifiedhowever there are some tree fragments which correspond to the novel coreference patterns of parenthetical alias as well as conjunctions of roles in npswe apply our extracted tree fragments to our blipp and wiki data sets and extract a set of compatible word pairs which match these fragments these words pairs will be used to relax the semantic compatibility filter mentions are compatible with prior mentions with the same head or with a semantically compatible head wordthis yields 585 pairwise f1 as well as similar improvements across other metricsby and large the word pairs extracted in this way are correct there are however wordpairs which introduce errorsin particular citystate constructions appears to be an appositive and incorrectly allows our system to have angeles as an antecedent for californiaanother common error is that the symbol is made compatible with a wide variety of common nouns in the financial domainwe present formal experimental results here we first evaluate our model on the ace2004culottatest dataset used in the stateoftheart systems from culotta et al and bengston and roth both of these systems were supervised systems discriminatively trained to maximize b3 and used features from many different structured resources including wordnet as well as domainspecific features our best b3 result of 790 is broadly in the range of these resultswe should note that in our work we use neither the gold mention types nor do we use the gold ner tags which bengston and roth doesacross metrics the syntactic constraints and semantic compatibility components contribute most to the overall final resulton the muc6test dataset our system outpersion made by the systemeach row is a mention type and the column the predicted mention type antecedentthe majority of errors are made in the nominal category forms both poon and domingos and finkel and manning on all comparable measures11 similarly on the ace2004nwire dataset we also outperform the stateoftheart unsupervised system of poon and domingos overall we conclude 
that our system outperforms stateoftheart unsupervised systems12 and is in the range of the stateofthe art systems of culotta et al and bengston and roth there are several general trends to the errors made by our systemtable 3 shows the number of pairwise errors made on muc6test dataset by mention type note these errors are not equally weighted in the final evaluations because of the transitive closure taken at the endthe most errors are made on nominal mentions with pronouns coming in a distant secondin particular we most frequently say a nominal is null when it has an antecedent this is typically due to not having the necessary semantic knowledge to link a nominal to a prior expressionin order to get a more thorough view of the because of pairwise errors we examined 20 random errors made in aligning each mention type to an antecedentwe categorized the errors as follows we incorrectly aligned a pronoun to a mention with which it is not semantically compatible mentions with the same head are always compatibleincludes modifier and specificity errors such as allowing lebanon and southern lebanon to coreferthis also includes errors of definiteness in nominals typically these errors involve a combination of missing syntactic and semantic informationthe result of this error analysis is given in table 4 note that a single error may be attributed to more than one becausedespite our efforts in section 3 to add syntactic and semantic information to our system the largest source of error is still a combination of missing semantic information or annotated syntactic structure rather than the lack of discourse or salience modelingour error analysis suggests that in order to improve the stateoftheart in coreference resolution future research should consider richer syntactic and semantic information than typically used in current systemsour approach is not intended as an argument against the more complex discoursefocused approaches that typify recent workinstead we note that rich syntactic and semantic processing vastly reduces the need to rely on discourse effects or evidence reconciliation for reference resolutionindeed we suspect that further improving the syntactic and semantic modules in our system may produce greater error reductions than any other route forwardof course a system which is rich in all axes will find some advantage over any simplified approachnonetheless our coreference system despite being relatively simple and having no tunable parameters or complexity beyond the nonreference complexity of its component modules manages to outperform stateoftheart unsupervised coreference resolution and be broadly comparable to stateoftheart supervised systems
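Pulling the pieces of sections 2 and 3 together, the antecedent choice for a single mention can be sketched as a filter-then-pick-closest routine: non-pronominal mentions must share a head or appear in the mined compatibility list, all candidates must pass the agreement constraints, and the survivor with the shortest tree path is selected. The mention attributes and the helper predicates (`agrees`, `tree_distance`) are assumed interfaces, not the paper's actual code.

```python
# Hedged sketch of the deterministic antecedent selection described above.
# Mentions are assumed to carry `.head` and `.mtype` attributes; `agrees`
# (number / NER-type agreement) and `tree_distance` (shortest parse-tree path)
# are placeholder helpers standing in for the paper's modules.

def select_antecedent(i, mentions, compatible_heads, agrees, tree_distance):
    m = mentions[i]
    survivors = []
    for a in mentions[:i]:                         # preceding mentions only
        if m.mtype != "pronoun":
            # nominals and propers need an identical head or a head pair
            # harvested from the unlabeled corpus
            if a.head != m.head and (a.head, m.head) not in compatible_heads:
                continue
        if not agrees(a, m):                       # agreement constraints
            continue
        survivors.append(a)
    if not survivors:
        return None                                # no antecedent: new entity
    return min(survivors, key=lambda a: tree_distance(a, m))
```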
D09-1120
simple coreference resolution with rich syntactic and semantic features. coreference systems are driven by syntactic semantic and discourse constraints. we present a simple approach which completely modularizes these three aspects. in contrast to much current work which focuses on learning and on the discourse component our system is deterministic and is driven entirely by syntactic and semantic compatibility as learned from a large unlabeled corpus. despite its simplicity and discourse naivete our system substantially outperforms all unsupervised systems and most supervised ones. primary contributions include the presentation of a simpletoreproduce highperforming baseline and the demonstration that most remaining errors can be attributed to syntactic and semantic factors external to the coreference phenomenon. we show that coreference errors in stateoftheart systems are frequently due to poor models of semantic compatibility. in our synconstr setting each referring mention is coreferent with any past mention with the same head or in a deterministic syntactic configuration. when searching for an antecedent for mk its candidate antecedents are visited in an order determined by their positions in the associated parse tree
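To complement the summary above, the semantic-compatibility harvesting it relies on can be sketched as a single pass over parsed unlabeled text: wherever an appositive or predicate-nominative configuration forces two NPs to corefer, record the pair of head words and keep the pairs seen often enough. The helper `forced_corefs`, which yields such NP pairs from a parse, is a hypothetical interface rather than anything from the paper.

```python
# Hedged sketch of mining compatible head pairs from reliable syntactic
# configurations (appositives and predicate nominatives), as described above.
# `forced_corefs(tree)` is an assumed helper yielding (np, antecedent) pairs
# whose coreference is forced by the configuration; it is not the paper's code.

from collections import Counter

def harvest_head_pairs(parsed_sentences, forced_corefs, min_count=2):
    counts = Counter()
    for tree in parsed_sentences:                  # e.g., parsed BLIPP / Wikipedia text
        for np, antecedent in forced_corefs(tree):
            counts[(np.head, antecedent.head)] += 1
    # keep pairs attested often enough, e.g., ("aol", "company")
    return {pair for pair, c in counts.items() if c >= min_count}
```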
bilinguallyconstrained shiftreduce parsing jointly parsing two languages has been shown to improve accuracies on either or both sides however its search space is much bigger than the monolingual case forcing existing approaches to employ complicated modeling and crude approximations here we propose a much simpler monowhere a sourcelanguage parser learns to exploit reorderings as adobservation but to build the targetside tree as well we show specifically how to enhance a shiftreduce dependency parser with alignment features to resolve shiftreduce conflicts experiments on the bilingual portion of chinese treebank show that with just 3 bilingual features we can improve parsing accuracies by 06 for both english and chinese over a stateoftheart with negligible efficiency overhead thus much faster than biparsing ambiguity resolution is a central task in natural language processinginterestingly not all languages are ambiguous in the same wayfor example prepositional phrase attachment is ambiguous in english but is strictly unambiguous in chinese and largely unambiguous japanese see two languages for better disambiguation which has been applied not only to this ppattachment problem but also to the more fundamental problem of syntactic parsing which subsumes the former as a subproblemfor example smith and smith and burkett and klein show that joint parsing on a bitext improves accuracies on either or both sides by leveraging bilingual constraints which is very promising for syntaxbased machine translation which requires parse trees for rule extraction however the search space of joint parsing is inevitably much bigger than the monolingual case forcing existing approaches to employ complicated modeling and crude approximationsjoint parsing with a simplest synchronous contextfree grammar is o as opposed to the monolingual o timeto make things worse languages are nonisomorphic ie there is no 1to1 mapping between tree nodes thus in practice one has to use more expressive formalisms such as synchronous treesubstitution grammars in fact rather than joint parsing per se burkett and klein resort to separate monolingual parsing and bilingual reranking over k2 tree pairs which covers a tiny fraction of the whole space we instead propose a much simpler alternative bilinguallyconstrained monolingual parsing where a sourcelanguage parser is extended to exploit the reorderings between languages as additional observation but not bothering to build a tree for the target side simultaneouslyto illustrate the idea suppose we are parsing the sentence both are possible but with a chinese translation the choice becomes clear because a chinese pp always immediately precedes the phrase it is modifying thus making ppattachment strictly unambiguous2 we can thus use chinese to help parse english ie whenever we have a ppattachment ambiguity we will consult the chinese translation and based on the alignment information decide where to attach the english ppon the other hand english can help chinese parsing as well for example in deciding the scope of relative clauses which is unambiguous in english but ambiguous in chinesethis method is much simpler than joint parsing because it remains monolingual in the backbone with alignment information merely as soft evidence rather than hard constraints since automatic word alignment is far from perfectit is thus straightforward to implement within a monolingual parsing algorithmin this work we choose shiftreduce dependency parsing for its simplicity and efficiencyspecifically we make the 
following contributionsthe basic idea of classical shiftreduce parsing from compiler theory is to perform a lefttoright scan of the input sentence and at each step choose one of the two actions either shift the current word onto the stack or reduce the top two items on the stack replacing them with their combinationthis idea has been applied to constituency parsing for example in sagae and lavie and we describe below a simple variant for dependency parsing similar to yamada and matsumoto and the arcstandard version of nivre basically we just need to split the reduce action into two symmetric actions reducel and reducer depending on which one of the two note that shift requires nonempty queue while reduce requires at least two elements on the stack items becomes the head after reductionmore formally we describe a parser configuration by a tuple where s is the stack q is the queue of remaining words of the input and a is the set of dependency arcs accumulated so far3 at each step we can choose one of the three actions these actions are summarized in table 1the initial configuration is always with empty stack and no arcs and the final configuration is where wj is recognized as the root of the whole sentence and a encodes a spanning tree rooted at wjfor a sentence of n words there are exactly 2n 1 actions n shifts and n 1 reductions since every word must be pushed onto stack once and every word except the root will eventually be popped in a reductionthe time complexity as other shiftreduce instances is clearly ofigure 2 shows the trace of this paradigm on the example sentencefor the first two configurations while gray words have been popped from stackafter step the process can take either or which correspond to the two attachments and in figure 1 respectively and only shift is possible since there are not enough items on the stack for reductionat step we perform a reducel making word i a modifier of saw after that the stack contains a single word and we have to shift the next word bill now we face a shiftreduce conflict we can either combine saw and bill in a reducer action or shift bill we will use features extracted from the configuration to resolve the conflictfor example one such feature could be a bigram st o st1 capturing how likely these two words are combined see table 2 for the complete list of feature templates we use in this baseline parserwe argue that this kind of shiftreduce conflicts are the major source of parsing errors since the other type of conflict reducereduce conflict is relatively easier to resolve given the partofspeech informationfor example between a noun and an adjective the former is much more likely to be the head shiftreduce resolution however is more nonlocal and often involves a triple for example for a typical ppattachmenton the other hand if we indeed make a wrong decision a reducereduce mistake just flips the head and the modifier and often has a more local effect on the shape of the tree whereas a shiftreduce mistake always leads stack wi and wi1 denote the current and next words on the queuet denotes the pos tag of a given word and lc and rc represent the leftmost and rightmost childsymbol o denotes feature conjunctioneach of these templates is further conjoined with the 3 actions shift reducel and reducer to vastly incompatible tree shapes with crossing brackets we will see in section 53 that this is indeed the case in practice thus suggesting us to focus on shiftreduce resolution which we will return to with the help of bilingual constraints in section 
3the three action system was originally described by yamada and matsumoto and then appeared as arcstandard in nivre but was argued against in comparison to the fouraction arceager variantmost subsequent works on shiftreduce or transitionbased dependency parsing followed arceager which now becomes the dominant stylebut we argue that arcstandard is preferable because and disjoint whereas the semantics of 4 actions are not completely disjointfor example their left action assumes an implicit reduce of the left item and their right action assumes an implicit shiftfurthermore these two actions have nontrivial preconditions which also causes the next problem we argue that this is rather complicated to implement3 the arcstandard scan always succeeds since at the end we can always reduce with empty queue whereas the arceager style sometimes goes into deadends where no action can perform this becomes parsing failures in practice leaving more than one fragments on stackas we will see in section 51 this simpler arcstandard system performs equally well with a stateoftheart arceager system on standard english treebank parsing we argue that all things being equal this simpler paradigm should be preferred in practice4 we also enhance deterministic shiftreduce parsing with beam search similar to zhang and clark where k configurations develop in parallelpseudocode 1 illustrates the algorithm where we keep an agenda v of the current active configurations and at each step try to extend them by applying one of the three actionswe then dump the best k new configurations from the buffer back pseudocode 1 beamsearch shiftreduce parsing into the agenda for the next stepthe complexity of this algorithm is o which subsumes the determinstic mode as a special case to train the parser we need an oracle or goldstandard action sequence for goldstandard dependency treesthis oracle turns out to be nonunique for the threeaction system because left dependents of a head can be reduced either before or after all right dependents are reducedfor example in figure 2 i is a left dependent of saw and can in principle wait until bill and with are reduced and then finally combine with sawwe choose to use the heuristic of shortest stack that always prefers reducel over shift which has the effect that all left dependents are first recognized insideout followed by all right dependents also insideout which coincides with the headdriven constituency parsing model of collins we use the popular online learning algorithm of structured perceptron with parameter averaging following collins and roark we also use the earlyupdate strategy where an update happens whenever the goldstandard actionsequence falls off the beam with the rest of the sequence neglectedas a special case for the deterministic mode updates always cooccur with the first mistake madethe intuition behind this strategy is that future mistakes are often caused by previous ones so with the parser on the wrong track future actions become irrelevant for learningsee section 53 for more discussions and cr at step in fig2 bold words are currently on stack while gray ones have been poppedhere the stack tops are st bill st1 saw and the queue head is wi with underlined texts mark the source and target spans being considered and wavy underlines mark the allowed spans red bold alignment links violate contiguity constraintsas suggested in section 22 shiftreduce conflicts are the central problem we need to address hereour intuition is whenever we face a decision whether to combine the stack 
tops st1 and st or to shift the current word wi we will consult the other language where the wordalignment information would hopefully provide a preference as in the running example of ppattachment we now develop this idea into bilingual contiguity featuresinformally if the correct decision is a reduction then it is likely that the corresponding words of st1 and st on the targetside should also form a contiguous spanfor example in figure 3 the source span of a reduction is saw bill which maps onto kandao bier on the chinese sidethis target span is contiguous because no word within this span is aligned to a source word outside of the source spanin this case we say feature c which encourages reducehowever in figure 3 the source span is still saw bill but this time maps onto a much longer span on the chinese sidethis target span is discontiguous since the chinese words na and wangyuanjin are alinged to english with and telescope both of which fall outside of the source spanin this case we say feature c which discourages reduce similarly we can develop another feature cr for the shift actionin figure 3 when considering shifting with the source span becomes bill with which maps to na bier on the chinese sidethis target span looks like discontiguous in the above definition with wangyuanjin aligned to telescope but we tolerate this case for the following reasonsthere is a crucial difference between shift and reduce in a shift we do not know yet the subtree spans the only thing we are sure of in a shift action is that st and wi will be combined before st1 and st are combined so we can tolerate any target word aligned to source word still in the queue but do not allow any target word aligned to an already recognized source wordthis explains the notational difference between cr and c where subscript r means right contiguityas a final example in figure 3 chinese word kandao aligns to saw which is already recognized and this violates the right contiguityso cr suggesting that shift is probably wrongto be more precise table 3 shows the formal definitions of the two featureswe basically m is maps a source span to the target language and m1 is the reverse operation mapping back to the source language map a source span sp to its target span m and check whether its reverse image back onto the source language m1 falls inside the allowed span apfor cr the allowed span extends to the right end of the sentence5 to conclude so far we have got two alignmentbased features c correlating with reduce and cr correlating with shiftin fact the conjunction of these two features is another feature with even stronger discrimination powerif is a very strong signal for shiftso in total we got three bilingual feature which in practice amounts to 24 instances we show in section 53 that these features do correlate with the correct shiftreduce actions in practicethe naive implemention of bilingual feature computation would be of o complexity in the worse case because when combining the largest spans one has to scan over the whole sentencewe envision the use of a clever datastructure would reduce the complexity but leave this to future work as the experiments show that 5our definition implies that we only consider faithful spans to be contiguous also note that source spans include all dependents of st and st1 the parser is only marginally slower with the new bilingual featuresthis is because the extra work with just 3 bilingual features is not the bottleneck in practice since the extraction of the vast amount of other features in 
table 2 dominates the computationbesides those cited in section 1 there are some other related work on using bilingual constraints for grammar induction for example hwa et al use simple heuristics to project english trees to spanish and chinese but get discouraging accuracy results learned from those projected treesfollowing this idea ganchev et al and smith and eisner use constrained them and parser adaptation techniques respectively to perform more principled projection and both achieve encouraging resultsour work by constrast never uses bilingual tree pairs not tree projections and only uses word alignment alone to enhance a monolingual grammar which learns to prefer targetside contiguitywe implement our baseline monolingual parser based on the shiftreduce algorithm in section 2 with feature templates from table 2we evaluate its performance on the standard penn english treebank dependency parsing task ie train on sections 0221 and test on section 23 with automatically assigned pos tags using a tagger similar to collins and using the headrules of yamada and matsumoto for conversion into dependency treeswe use section 22 as dev set to determine the optimal number of iterations in perceptron trainingtable 4 compares our baseline against the stateoftheart graphbased and transitionbased approaches and confirms that our system performs at the same level with those stateoftheart and runs extremely fast in the deterministic mode and still quite fast in the beamsearch mode the bilingual data we use is the translated portion of the penn chinese treebank corresponding to articles 1325 of ptb which have english translations with goldstandard parse trees table 5 shows the split of this data into training development and test subsets according to burkett and klein note that not all sentence pairs could be included since many of them are not onetoone aligned at the sentence levelour wordalignments are generated from the hmm aligner of liang et al trained on approximately 17m sentence pairs this aligner outputs soft alignments ie posterior probabilities for each sourcetarget word pairwe use a pruning threshold of 0535 to remove lowconfidence alignment links6 and use the remaining links as hard alignments we leave the use of alignment probabilities to future workfor simplicity reasons in the following experiments we always supply goldstandard pos tags as part of the input to the parserbefore evaluating our bilingual approach we need to verify empirically the two assumptions we made about the parser in sections 2 and 3 baseline model on english dev setsh re means should shift but reducedshiftreduce conflicts overwhelmingly dominatehypothesis 1 is verified in table 6 where we count all the first mistakes the baseline parser makes on the english dev set in shiftreduce parsing further mistakes are often caused by previous ones so only the first mistake in each sentence is easily identifiable7 this is also the argument for early update in applying perceptron learning to these incremental parsing algorithms among the 197 first mistakes the vast majority 190 of them are shiftreduce errors and only 7 are due to reducereduce conflicts8 these statistics confirm our intuition that shiftreduce decisions are much harder to make during parsing and contribute to the overwhelming majority of errors which is studied in the next hypothesishypothesis 2 is verified in table 7we take the goldstandard shiftreduce sequence on the english dev set and classify them into the four categories based on bilingual contiguity features 
c ie whether the top 2 spans on stack is contiguous and cr ie whether the stack top is contiguous with the current word wiaccording to discussions in section 3 when is contiguous and is not it is a clear signal for reduce rather than shift and is strongly supported by the data and while when is contiguous and is not it should suggest shift rather than reduce and is mildly supported by the data when and are both contiguous or both discontiguous it should be considered a neutral signal and is also consistent with the data so to conclude this bilingual hypothesis is empirically justifiedon the other hand we would like to note that these correlations are done with automatic word alignments which can be quite noisywe suspect that using manual alignments would result in a better correlation though for the main parsing results we can only afford automatic alignments in order for our approach to be widely applicable to any bitextwe incorporate the three bilingual features into the baseline parser retrain it and test its performance on the english dev set with varying beam sizetable 8 shows that bilingual constraints help more with larger beams from almost no improvement with the deterministic mode to 05 better with the largest beam this could be explained by the fact that beamsearch is more robust than the deterministic mode where in the latter if our bilingual features misled the parser into a mistake there is no chance of getting back while in the former multiple configurations are being pursued in parallelin terms of speed both parsers run proportionally slower with larger beams as the time complexity is linear to the beamsizecomputing the bilingual features further slows it down but only fractionally so which is appealing in practiceby contrast burkett and klein reported their approach of monolingual kbest parsing followed by bilingual k2best reranking to be 38 times slower than monolingual parsingour final results on the test set are summarized in table 9on both english and chinese the addition of bilingual features improves dependency arc accuracies by 06 which is mildly significant using the ztest of collins et al we also compare our results against the berkeley parser as a reference system with the exact same setting and the resulting trees are converted into dependency via the same headruleswe use 5 iterations of splitmerge grammar induction as the 6th iteration overfits the small training setthe result is worse than our baseline on english but better than our bilingual parser on chinesethe discrepancy between english and chinese is probably due to the fact that our baseline feature templates are engineered on english not chinesewe have presented a novel parsing paradigm bilinguallyconstrained monolingual parsing which is much simpler than joint parsing yet still yields mild improvements in parsing accuracy in our preliminary experimentsspecifically we showed a simple method of incorporating alignment features as soft evidence on top of a stateoftheart shiftreduce dependency parser which helped better resolve shiftreduce conflicts with fractional efficiency overheadthe fact that we managed to do this with only three alignment features is on one hand encouraging but on the other hand leaving the bilingual feature space largely unexploredso we will engineer more such features especially with lexicalization and soft alignments and study the impact of alignment quality on parsing improvementfrom a linguistics point of view we would like to see how linguistics distance affects this approach eg 
we suspect englishfrench would not help each other as much as englishchinese do and it would be very interesting to see what types of syntactic ambiguities can be resolved across different language pairsfurthermore we believe this bilingualmonolingual approach can easily transfer to shiftreduce constituency parsing we thank the anonymous reviewers for pointing to us references about arcstandardwe also thank aravind joshi and mitch marcus for insights on pp attachment joakim nivre for discussions on arceager yang liu for suggestion to look at manual alignments and david a smith for sending us his paperthe second and third authors were supported by national natural science foundation of china contracts 60603095 and 60736014 and 863 state key project no2006aa010108
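Since the whole approach is built on the arc-standard transition system, a compact sketch of its three actions may help: a configuration is a stack, a queue, and a set of arcs, and every step either shifts the next word or reduces the top two stack items, keeping one of them as the head. The action scorer is a hypothetical placeholder for the averaged-perceptron model over the features of table 2; only the transition logic follows the paper.

```python
# Hedged sketch of the arc-standard system described above (deterministic mode).
# `score_action(stack, queue, arcs, action)` is a placeholder for the trained
# averaged perceptron over the features of table 2; arcs are (head, dependent)
# pairs.  Only the transition logic is taken from the paper.

def parse(words, score_action):
    stack, queue, arcs = [], list(words), set()
    while queue or len(stack) > 1:                 # exactly 2n - 1 actions in total
        legal = (["SHIFT"] if queue else []) + \
                (["REDUCE_L", "REDUCE_R"] if len(stack) >= 2 else [])
        action = max(legal, key=lambda a: score_action(stack, queue, arcs, a))

        if action == "SHIFT":
            stack.append(queue.pop(0))             # push the next input word
        else:
            top, second = stack.pop(), stack.pop() # st and st-1
            if action == "REDUCE_L":
                arcs.add((top, second))            # st heads st-1 (left item is the modifier)
                stack.append(top)
            else:                                  # REDUCE_R
                arcs.add((second, top))            # st-1 heads st (right item is the modifier)
                stack.append(second)
    return arcs, (stack[0] if stack else None)     # dependency arcs and the sentence root
```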
D09-1127
bilinguallyconstrained shiftreduce parsing. jointly parsing two languages has been shown to improve accuracies on either or both sides. however its search space is much bigger than the monolingual case forcing existing approaches to employ complicated modeling and crude approximations. here we propose a much simpler alternative bilinguallyconstrained monolingual parsing where a sourcelanguage parser learns to exploit reorderings as additional observation but not bothering to build the targetside tree as well. we show specifically how to enhance a shiftreduce dependency parser with alignment features to resolve shiftreduce conflicts. experiments on the bilingual portion of chinese treebank show that with just 3 bilingual features we can improve parsing accuracies by 06 for both english and chinese over a stateoftheart baseline with negligible efficiency overhead thus much faster than biparsing. we keep the probabilities of a natural rule unchanged and set those of a virtual rule to 1. we improve english prepositional phrase attachment using features from an unparsed chinese sentence
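Finally, the contiguity test that underlies the three bilingual features can be written in a few lines: project a source span onto the target side through the word alignment, then check that nothing inside the projection aligns back to a source word outside the allowed span. The alignment is assumed to be a set of (source index, target index) links after pruning; this representation, and the helper itself, are illustrative rather than the authors' implementation.

```python
# Hedged sketch of the bilingual contiguity check behind the features above.
# `alignment` is assumed to be a set of (source_index, target_index) links kept
# after pruning low-confidence posteriors; spans are half-open [lo, hi) ranges.

def contiguous(src_span, allowed_span, alignment):
    src_lo, src_hi = src_span          # source span being projected
    alw_lo, alw_hi = allowed_span      # source positions tolerated on the way back

    tgt = {t for (s, t) in alignment if src_lo <= s < src_hi}
    if not tgt:
        return True                    # unaligned span: treat as contiguous
    tgt_lo, tgt_hi = min(tgt), max(tgt) + 1

    # the reverse image of the target span must fall inside the allowed source span
    back = {s for (s, t) in alignment if tgt_lo <= t < tgt_hi}
    return all(alw_lo <= s < alw_hi for s in back)

# roughly: c uses the span of st-1 and st (with their dependents) as its own
# allowed span, while c_r lets the allowed span of st and the queue head wi run
# to the right end of the source sentence, tolerating links into unparsed words.
```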
phrase dependency parsing for opinion mining in this paper we present a novel approach for mining opinions from product reviews where it converts opinion mining task to identify product features expressions of opinions and relations between them by taking advantage of the observation that a lot of product features are phrases a concept of phrase dependency parsing is introduced which extends traditional dependency parsing to phrase level this concept is then implemented for extracting relations between product features and expressions of opinions experimental evaluations show that the mining task can benefit from phrase dependency parsing as millions of users contribute rich information to the internet everyday an enormous number of product reviews are freely written in blog pages web forums and other consumergenerated mediums this vast richness of content becomes increasingly important information source for collecting and tracking customer opinionsretrieving this information and analyzing this content are impossible tasks if they were to be manually donehowever advances in machine learning and natural language processing present us with a unique opportunity to automate the decoding of consumers opinions from online reviewsprevious works on mining opinions can be divided into two directions sentiment classification and sentiment related information extractionthe former is a task of identifying positive and negative sentiments from a text which can be a passage a sentence a phrase and even a word the latter focuses on extracting the elements composing a sentiment textthe elements include source of opinions who expresses an opinion target of opinions which is a receptor of an opinion opinion expression which delivers an opinion some researchers refer this information extraction task as opinion extraction or opinion miningcomparing with the former one opinion mining usually produces richer informationin this paper we define an opinion unit as a triple consisting of a product feature an expression of opinion and an emotional attitudewe use this definition as the basis for our opinion mining tasksince a product review may refer more than one product feature and express different opinions on each of them the relation extraction is an important subtask of opinion miningconsider the following sentences and its size cannot be beatthe phrases underlined are the product features marked with square brackets are opinion expressionsproduct features and opinion expressions with identical superscript compose a relationfor the first sentence an opinion relation exists between the canon sd500 and recommend but not between picture and recommendthe example shows that more than one relation may appear in a sentence and the correct relations are not simple cartesian product of opinion expressions and product featuressimple inspection of the data reveals that product features usually contain more than one word such as lcd screen image color canon powershot sd500 and so onan incomplete product feature will confuse the successive analysisfor example in passage image color is disappointed the negative sentiment becomes obscure if only image or color is picked outsince a product feature could not be represented by a single word dependency parsing might not be the best approach here unfortunately which provides dependency relations only between wordsprevious works on relation extraction usually use the head word to represent the whole phrase and extract features from the word level dependency treethis solution is 
problematic because the information provided by the phrase itself can not be used by this kind of methodsand experimental results show that relation extraction task can benefit from dependencies within a phraseto solve this issue we introduce the concept of phrase dependency parsing and propose an approach to construct itphrase dependency parsing segments an input sentence into phrases and links segments with directed arcsthe parsing focuses on the phrases and the relations between them rather than on the single words inside each phrasebecause phrase dependency parsing naturally divides the dependencies into local and global a novel tree kernel method has also been proposedthe remaining parts of this paper are organized as follows in section 2 we discuss our phrase dependency parsing and our approachin section 3 experiments are given to show the improvementsin section 4 we present related work and section 5 concludes the paperfig1 gives the architecture overview for our approach which performs the opinion mining task in three main steps constructing phrase dependency tree from results of chunking and dependency parsing extracting candidate product features and candidate opinion expressions extracting relations between product features and opinion expressionsdependency grammar is a kind of syntactic theories presented by lucien tesnierein dependency grammar structure is determined by the relation between a head and its dependentsin general the dependent is a modifier or complement the head plays a more important role in determining the behaviors of the pairtherefore criteria of how to establish dependency relations and how to distinguish the head and dependent in such relations is central problem for dependency grammarfig2 shows the dependency representation of an example sentencethe root of the sentence is enjoyedthere are seven pairs of dependency relationships depicted by seven arcs from heads to dependentscurrently the mainstream of dependency parsing is conducted on lexical elements relations are built between single wordsa major information loss of this word level dependency tree compared with constituent tree is that it does not explicitly provide local structures and syntactic categories of phrases on the other hand dependency tree provides connections between distant words which are useful in extracting long distance relationstherefore compromising between the two we extend the dependency tree node with phrasesthat implies a noun phrase cannon sd500 powershot can be a dependent that modifies a verb phrase head really enjoy using with relation type dobjthe feasibility behind is that a phrase is a syntactic unit regardless of the length or syntactic category and it is acceptable to substitute a single word by a phrase with same syntactic category in a sentenceformally we define the dependency parsing with phrase nodes as phrase dependency parsinga dependency relationship which is an asymmetric binary relationship holds between two phrasesone is called head which is the central phrase in the relationthe other phrase is called dependent which modifies the heada label representing the relation type is assigned to each dependency relationship such as subj obj and so onfig2 shows an example of phrase dependency parsing resultby comparing the phrase dependency tree and the word level dependency tree in fig2 the former delivers a more succinct tree structurelocal words in same phrase are compacted into a single nodethese words provide local syntactic and semantic effects which enrich the 
phrase they belong tobut they should have limited influences on the global tree topology especially in applications which emphasis the whole tree structures such as tree kernelspruning away local dependency relations by additional phrase structure information phrase dependency parsing accelerates following processing of opinion relation extractionto construct phrase dependency tree we propose a method which combines results from an existing shallow parser and a lexical dependency parsera phrase dependency tree is defined as t where v is the set of phrases e is the dependency relations among the phrases in v representing by direct edgesto reserve the word level dependencies inside a phrase we define a nested structure for a phrase ti in v ti vi v1 v2 vm is the internal words ei is the internal dependency relationswe conduct the phrase dependency parsing in this way traverses word level dependency tree in preorder when visits a node r searches in its children and finds the node set d which are in the same phrase with r according algorithm 1 pseudocode for constructing the phrase dependency tree input output phrase dependency tree t where to the shallow parsing resultcompacts d and r into a single nodethen traverses all the remaining children in the same waythe algorithm is shown in alg1the output of the algorithm is still a tree for we only cut edges which are compacted into a phrase the connectivity is keepednote that there will be inevitable disagrees between shallow parser and lexical dependency parser the algorithm implies that we simply follow the result of the latter one the phrases from shallow parser will not appear in the final result if they cannot be found in the procedureconsider the following example fig2 shows the procedure of phrase dependency parsingfig2 is the result of the lexical dependency parsershallow parsers result is shown in fig2chunk phrases np vp and np are nodes in the output phrase dependency treewhen visiting node enjoyed in fig2 the shallow parser tells that really and using which are children of enjoy are in the same phrase with their parent then the three nodes are packedthe final phrase dependency parsing tree is shown in the fig2in this work we define that product features are products product parts properties of products properties of parts company names and related objectsfor examplein consumer electronic domain canon powershot image qualitycamera laptop are all product featuresfrom analyzing the labeled corpus we observe that more than 98 of product features are in a single phrase which is either noun phrase or verb phrase based on it all nps and vps are selected as candidate product featureswhile prepositional phrases and adjectival phrases are excludedalthough it can cover nearly all the true product features the precision is relatively lowthe large amount of noise candidates may confuse the relation extraction classifierto shrink the size of candidate set we introduce language model by an intuition that the more likely a phrase to be a product feature the more closely it related to the product reviewin practice for a certain domain of product reviews a language model is build on easily acquired unlabeled dataeach candidate np or vp chunk in the output of shallow parser is scored by the model and cut off if its score is less than a thresholdopinion expressions are spans of text that express a comment or attitude of the opinion holder which are usually evaluative or subjective phraseswe also analyze the labeled corpus for opinion expressions and observe 
that many opinion expressions are used in multiple domains which is identical with the conclusion presented by kobayashi et al they collected 5550 opinion expressions from various sources the coverage of the dictionary is high in multiple domainsmotivated by those observations we use a dictionary which contains 8221 opinion expressions to select candidates an assumption we use to filter candidate opinion expressions is that opinion expressions tend to appear closely with product features which is also used to extract product features by hu and liu in our experiments the tree distance between product feature and opinion expression in a relation should be less than 5 in the phrase dependency parsing treethis section describes our method on extracting relations between opinion expressions and product features using phrase dependency treemanually built patterns were used in previous works which have an obvious drawback that those patterns can hardly cover all possible situationsby taking advantage of the kernel methods which can search a feature space much larger than that could be represented by a feature extractionbased approach we define a new tree kernel over phrase dependency trees and incorporate this kernel within an svm to extract relations between opinion expressions and product featuresthe potential relation set consists of the all combinations between candidate product features and candidate opinion expressions in a sentencegiven a phrase dependency parsing tree we choose the subtree rooted at the lowest common parent of opinion expression and product feature to represent the relationdependency tree kernels has been proposed by their kernel is defined on lexical dependency tree by the convolution of similarities between all possible subtreeshowever if the convolution containing too many irrelevant subtrees overfitting may occur and decreases the performance of the classifierin phrase dependency tree local words in a same phrase are compacted therefore it provides a way to treat local dependencies and global dependencies differently as a consequence these two kinds of dependencies will not disturb each other in measuring similaritylater experiments prove the validity of this statementwe generalize the definition by to fit the phrase dependency treeuse the symbols in section 212 9 i and 9j are two trees with root ri and rj k is the kernel function for themfirstly each tree node tk e 9i is augmented with a set of features f and an instance of f for tk is fk fka match function m is defined on comparing a subset of nodes features m c_ f and in the same way a similarity function s are defined on 5 c f where i 1 if fis fs c for the given phrase dependency parsing trees the kernel function k is defined as folkc is the kernel function over ri and rjs childrendenote a is a continuous subsequence of indices a a 1 a l for ris children where l is its length as is the sth element in aand likewise b for rj where the constant 0 a 1 normalizes the effects of children subsequences lengthcompared with the definitions in we add term kin to handle the internal nodes of a pharse and make this extension still satisfy the kernel function requirements the consideration is that the local words should have limited effects on whole tree structuresso the kernel is defined on external children and internal nodes separately annotator extracted 3595 relations while the other annotator a2 extracted 3745 relations an a1 d 3217 cases of them matchedin order to measure the annotation quality we use the following 
metric to measure the interannotator agreement which is also used by wiebe et al as the result the local words are not involved in in this section we describe the annotated corpus and experiment configurations including baseline we conducted experiments with labeled corpus which are selected from hu and liu jindal and liu have builttheir documents are collected from amazoncom and cnetcom where products have a large number of reviewsthey also manually labeled product features and polarity orientationsour corpus is selected from them which contains customer reviews of 11 products belong to 5 categoriestable 1 gives the detail statisticssince we need to evaluate not only the product features but also the opinion expressions and relations between them we asked two annotators to annotate them independentlythe annotators started from identifying product featuresthen for each product feature they annotated the opinion expression which has relation with itfinally one features for match function where agr represents the interannotator agreement between annotator a and b a and b are the sets of anchors annotated by annotators a and b agr was 859 and agr was 895it indicates that the reliability of our annotated corpus is satisfactoryresults of extracting product features and opinion expressions are shown in table 2we use precision recall and fmeasure to evaluate performancesthe candidate product features are extracted by the method described in section 22 whose result is in the first row6760 of 24414 candidate product features remained after the filtering which means we cut 72 of irrelevant candidates with a cost of 145 loss in true answerssimilar to the product feature extraction the precision of extracting opinion expression is relatively low while the recall is 752since both product features and opinion expressions extractions are preprocessing steps recall is more importantin order to compare with stateoftheart results we also evaluated the following methodstable 5 shows the performances of different relation extraction methods with indomain datafor each domain we conducted 5fold cross validationtable 6 shows the performances of the extraction methods on crossdomain datawe use the digital camera and cell phone domain as training setthe other domains are used as testing settable 5 presents different methods results in five domainswe observe that the three learning based methods perform better than the adjacent baseline in the first three domainshowever in other domains directly adjacent method is better than the learning based methodsthe main difference between the first three domains and the last two domains is the size of datait implies that the simple adjacent method is also competent when the training set is smalla further inspection into the result of first 3 domains we can also conclude that 1 tree kernels are better than adjacent svm1 and svm2 in all domainsit proofs that the dependency tree is important in the opinion relation extractionthe reason for that is a connection between an opinion and its target can be discovered with various syntactic structures2 the kernel defined on phrase dependency tree outperforms kernel defined on word level dependency tree by 48 in averagewe believe the main reason is that phrase dependency tree provides a more succinct tree structure and the separative treatment of local dependencies and global dependencies in kernel computation can indeed improve the performance of relation extractionto analysis the results of preprocessing steps influences on the 
following relation extraction we provide 2 additional experiments which the product features and opinion expressions are all correctly extracted respectively oeright and pfrightthese two results show that given an exactly extraction of opinion expression and product feature the results of opinion relation extraction will be much betterfurther opinion expressions are more influential which naturally means the opinion expressions are crucial in opinion relation extractionfor evaluations on cross domain the adjacent method does not need training data its results are the same as the indomain experimentsnote in table 3 and table 4 we do not use domain related features in svm1 svmwtree svmptree but svm2s features are domain dependentsince the crossdomain training set is larger than the original one in diaper and dvd domain the models are trained more sufficientlythe final results on crossdomain are even better than indomain experiments on svm1 svmwtree and svmptree with percentage of 46 86 103 in averageand the crossdomain training set is smaller than indomain in mp3 but it also achieve competitive performance with the indomainon the other hand svm2s result decreased compared with the indomain experiments because the test domain changedat the same time svmptree outperforms other methods which is similar in indomain experimentsopinion mining has recently received considerable attentionamount of works have been done on sentimental classification in different levels while we focus on extracting product features opinion expressions and mining relations in this paperkobayashi et al presented their work on extracting opinion units including opinion holder subject aspect and evaluationsubject and aspect belong to product features while evaluation is the opinion expression in our workthey converted the task to two kinds of relation extraction tasks and proposed a machine learningbased method which combines contextual clues and statistical cluestheir experimental results showed that the model using contextual clues improved the performancehowever since the contextual information in a domain is specific the model got by their approach can not easily converted to other domainschoi et al used an integer linear programming approach to jointly extract entities and relations in the context of opinion oriented information extractionthey identified expressions of opinions sources of opinions and the linking relation that exists between themthe sources of opinions denote to the person or entity that holds the opinionanother area related to our work is opinion expressions identification they worked on identifying the words and phrases that express opinions in textaccording to wiebe et al there are two types of opinion expressions direct subjective expressions and expressive subjective elementsin this paper we described our work on mining opinions from unstructured documentswe focused on extracting relations between product features and opinion expressionsthe novelties of our work included 1 we defined the phrase dependency parsing and proposed an approach to construct the phrase dependency trees 2 we proposed a new tree kernel function to model the phrase dependency treesexperimental results show that our approach improved the performances of the mining taskthis work was funded by chinese nsf 60673038 doctoral fund of ministry of education of china 200802460066 and shanghai science and technology development funds 08511500302the authors would like to thank the reviewers for their useful comments
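The tree-merging procedure sketched in Algorithm 1 above can be made concrete with a short program. The following is a minimal sketch, assuming the word-level dependency arcs and the shallow-parser chunk assignments are already available as dictionaries; all names (build_phrase_tree, heads, chunk_of) and the toy sentence are illustrative rather than taken from the paper, and disagreements between the chunker and the dependency parser are resolved by simply following the lexical dependencies, as the text above specifies.

```python
from collections import defaultdict

def build_phrase_tree(heads, chunk_of):
    """Collapse a word-level dependency tree into phrase-level nodes.

    heads    : dict token -> head token (the root's head is -1)
    chunk_of : dict token -> chunk id assigned by the shallow parser
    Returns (members, arcs): members maps a phrase-representative token to the
    set of tokens packed into that phrase; arcs maps each non-root phrase
    representative to the representative of its head phrase.
    """
    children = defaultdict(list)
    root = None
    for tok, h in heads.items():
        if h == -1:
            root = tok
        else:
            children[h].append(tok)

    phrase_of = {}                 # token -> representative token of its phrase
    members = defaultdict(set)

    def visit(tok, rep):
        # preorder traversal: children in the same chunk as `tok` are packed
        # into the phrase represented by `rep`; other children start new phrases
        phrase_of[tok] = rep
        members[rep].add(tok)
        for child in children[tok]:
            if chunk_of.get(child) == chunk_of.get(tok):
                visit(child, rep)
            else:
                visit(child, child)

    visit(root, root)

    # phrase-level arcs: a non-root phrase depends on the phrase of its head word
    arcs = {rep: phrase_of[heads[rep]] for rep in members if rep != root}
    return members, arcs

# toy usage, loosely following the example discussed above:
heads = {"I": "enjoyed", "really": "enjoyed", "enjoyed": -1,
         "using": "enjoyed", "canon": "powershot", "powershot": "using"}
chunk_of = {"I": "NP1", "really": "VP1", "enjoyed": "VP1", "using": "VP1",
            "canon": "NP2", "powershot": "NP2"}
members, arcs = build_phrase_tree(heads, chunk_of)
# members packs {really, enjoyed, using} into one VP node and {canon, powershot}
# into one NP node, with the NP depending on the VP, as in the paper's figure.
```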
D09-1159
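The kernel definition above (a match function m, a similarity function s, the children term K_c with decay 0 < λ < 1, and the added internal term K_in) reaches us garbled by extraction, so the following is only a loose, hedged sketch of the recursion pattern of such a dependency-tree convolution kernel, extended with a separate term over a phrase's internal words as the text describes. The feature sets, the exact subsequence enumeration and the normalization differ from the paper's definition; every name and constant here is illustrative.

```python
LAMBDA = 0.5  # decay on children-subsequence length; the paper assumes 0 < lambda < 1

def match(f1, f2):
    # m(.,.): hard gate on a small subset of node features
    return f1.get("pos") == f2.get("pos")

def sim(f1, f2):
    # s(.,.): number of agreeing features on a comparison subset
    return sum(1 for k in ("word", "pos", "rel") if f1.get(k) == f2.get(k))

def kernel(n1, n2):
    """K(T1, T2) on phrase nodes: zero unless the roots match, otherwise node
    similarity plus an internal-word term plus an external-children term."""
    if not match(n1["feat"], n2["feat"]):
        return 0.0
    return (sim(n1["feat"], n2["feat"])
            + internal_kernel(n1.get("internal", []), n2.get("internal", []))
            + children_kernel(n1.get("children", []), n2.get("children", [])))

def internal_kernel(ws1, ws2):
    # K_in: words packed inside the two phrases are compared directly, so local
    # dependencies do not interfere with the global tree topology
    return sum(sim(a, b) for a in ws1 for b in ws2 if match(a, b))

def children_kernel(cs1, cs2):
    # K_c: convolution over contiguous subsequences of the two children lists,
    # discounted by LAMBDA ** length (a simplified stand-in for the paper's K_c)
    total = 0.0
    for i in range(len(cs1)):
        for j in range(len(cs2)):
            length = 0
            while (i + length < len(cs1) and j + length < len(cs2)
                   and match(cs1[i + length]["feat"], cs2[j + length]["feat"])):
                length += 1
                total += (LAMBDA ** length) * sum(
                    kernel(cs1[i + k], cs2[j + k]) for k in range(length))
    return total
```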
phrase dependency parsing for opinion mining. in this paper we present a novel approach for mining opinions from product reviews, where the opinion mining task is converted to identifying product features, expressions of opinions and relations between them. by taking advantage of the observation that a lot of product features are phrases, a concept of phrase dependency parsing is introduced, which extends traditional dependency parsing to the phrase level. this concept is then implemented for extracting relations between product features and expressions of opinions. experimental evaluations show that the mining task can benefit from phrase dependency parsing. we utilize the dependency parser to extract the noun phrases and verb phrases from the reviews as the aspect candidates. for a monolingual task, we use a shallow parser to convert lexical dependencies from a dependency parser into phrase dependencies.
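The directional inter-annotator agreement quoted in the annotation-quality discussion above (roughly 0.859 and 0.895, following the measure used by wiebe et al.) is simple enough to state in a few lines. The sketch below is a hedged reading of that measure as |A ∩ B| / |A|, and it reproduces the reported figures from the counts given in the text (3595 and 3745 extracted relations, 3217 matched).

```python
def agr(a, b):
    """Directional agreement: fraction of annotator A's items also marked by B."""
    a, b = set(a), set(b)
    return len(a & b) / len(a)

# reproducing the reported numbers directly from the counts in the text
matched, total_a1, total_a2 = 3217, 3595, 3745
print(round(matched / total_a1, 3))  # 0.895 -- agreement relative to annotator a1
print(round(matched / total_a2, 3))  # 0.859 -- agreement relative to annotator a2
```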
on dual decomposition and linear programming relaxations for natural language processing paper introduces decomposition a framework for deriving inference algorithms for nlp problems the approach relies on standard dynamicprogramming algorithms as oracle solvers for subproblems together with a simple method for forcing agreement between the different oracles the approach provably solves a linear programming relaxation of the global inference problem it to algorithms that are in that they existing decoding algorithms in that they avoid exact algorithms for the full and often in that empirically they often recover the correct solution in spite of using an lp relaxation we give experimental results on two problems 1 the combination of two lexicalized parsing models and 2 the combination of a lexicalized parsing model and a trigram partofspeech tagger dynamic programming algorithms have been remarkably useful for inference in many nlp problemsunfortunately as models become more complex for example through the addition of new features or components dynamic programming algorithms can quickly explode in terms of computational or implementational complexity1 as a result efficiency of inference is a critical bottleneck for many problems in statistical nlpthis paper introduces dual decomposition as a framework for deriving inference algorithms in nlpdual decomposition leverages the observation that complex inference problems can often be decomposed into efficiently solvable subproblemsthe approach leads to inference algorithms with the following properties the approach is very general and should be applicable to a wide range of problems in nlpthe connection to linear programming ensures that the algorithms provide a certificate of optimality when they recover the exact solution and also opens up the possibility of methods that incrementally tighten the lp relaxation until it is exact the structure of this paper is as followswe first give two examples as an illustration of the approach 1 integrated parsing and trigram partofspeech tagging and 2 combined phrasestructure and dependency parsingin both settings it is possible to solve the integrated problem through an intersected dynamic program can be usedhowever these methods although polynomial time are substantially less efficient than our algorithms and are considerably more complex to implementnext we describe exact polyhedral formulations for the two problems building on connections between dynamic programming algorithms and marginal polytopes as described in martin et al these allow us to precisely characterize the relationship between the exact formulations and the lp relaxations that we solvewe then give guarantees of convergence for our algorithms by showing that they are instantiations of lagrangian relaxation a general method for solving linear programs of a particular formfinally we describe experiments that demonstrate the effectiveness of our approachfirst we consider the integration of the generative model for phrasestructure parsing of collins with the secondorder discriminative dependency parser of koo et al this is an interesting problem in its own right the goal is to inject the high performance of discriminative dependency models into phrasestructure parsingthe method uses offtheshelf decoders for the two modelswe find three main results 1 in spite of solving an lp relaxation empirically the method finds an exact solution on over 99 of the examples 2 the method converges quickly typically requiring fewer than 10 iterations of 
decoding 3 the method gives gains over a baseline method that forces the phrasestructure parser to produce the same dependencies as the firstbest output from the dependency parser model has an f1 score of 881 the baseline method has an f1 score of 897 and the dual decomposition method has an f1 score of 907in a second set of experiments we use dual decomposition to integrate the trigram pos tagger of toutanova and manning with the parser of collins we again find that the method finds an exact solution in almost all cases with convergence in just a few iterations of decodingalthough the focus of this paper is on dynamic programming algorithmsboth in the experiments and also in the formal results concerning marginal polytopesit is straightforward to use other combinatorial algorithms within the approachfor example koo et al describe a dual decomposition approach for nonprojective dependency parsing which makes use of both dynamic programming and spanning tree inference algorithmsdual decomposition is a classical method for solving optimization problems that can be decomposed into efficiently solvable subproblemsour work is inspired by dual decomposition methods for inference in markov random fields in this approach the mrf is decomposed into subproblems corresponding to treestructured subgraphs that together cover all edges of the original graphthe resulting inference algorithms provably solve an lp relaxation of the mrf inference problem often significantly faster than commercial lp solvers our work is also related to methods that incorporate combinatorial solvers within loopy belief propagation either for map inference or for computing marginals our approach similarly makes use of combinatorial algorithms to efficiently solve subproblems of the global inference problemhowever unlike lbp our algorithms have strong theoretical guarantees such as guaranteed convergence and the possibility of a certificate of optimalitythese guarantees are possible because our algorithms directly solve an lp relaxationother work has considered lp or integer linear programming formulations of inference in nlp these approaches typically use generalpurpose lp or ilp solversour method has the advantage that it leverages underlying structure arising in lp formulations of nlp problemswe will see that dynamic programming algorithms such as cky can be considered to be very efficient solvers for particular lpsin dual decomposition these lpsand their efficient solverscan be embedded within larger lps corresponding to more complex inference problemswe now describe the type of models used throughout the paperwe take some care to set up notation that will allow us to make a clear connection between inference problems and linear programmingour first example is weighted cfg parsingwe assume a contextfree grammar in chomsky normal form with a set of nonterminals n the grammar contains all rules of the form a b c and a w where a b c e n and w e v for rules of the form a w we refer to a as the partofspeech tag for w we allow any nonterminal to be at the root of the treegiven a sentence with n words w1 w2 wn a parse tree is a set of rule productions of the form ha b c i k ji where a b c n and 1 i k 0 is the step size of the updatethe complete subgradient algorithm is given in figure 4the following convergence theorem is wellknown theorem 61 if limk 0 and oo then limk l lthe following proposition is easily verified proposition 61 the algorithm in figure is an instantiation of the algorithm in figure 48 with x2 conv and the 
matrices e and f defined to be binary matrices specifying the constraints µ v for all e zuniunder an appropriate definition of the step sizes it follows that the algorithm in figure 1 defines a sequence of lagrange multiplers you minimizing a dual of the lp relaxation in eq10a similar result holds for the algori ginal inference problem if case 1 does not arise then a couple of strategies are possiblethe first is to define the primal solution to be the average of the solutions enour first set of experiments considers the integration of model 1 of collins 111 and the 2nd order discriminative dependency parser of koo et al the inference problem for a sentence x is to find where y is the set of all lexicalized phrasestructure trees for the sentence x f1 is the score under model 1 f2 is the score under koo et al for the dependency structure implied by y and γ 0 is a parameter dictating the relative weight of the two models12 this problem is similar to the second example in section 4 a very similar dual decomposition algorithm to that described in section 42 can be derivedwe used the penn wall street treebank for the experiments with sections 221 for training section 22 for development and section 23 for testingthe parameter γ was chosen to optimize performance on the development setwe ran the dual decomposition algorithm with a limit of k 50 iterationsthe dual decomposition algorithm returns an exact solution if case 1 occurs as defined in section 62 we found that of 2416 sentences in section 23 case 1 occurred for 2407 sentencestable 1 gives statistics showing the number of iterations required for convergenceover 80 of the examples converge in 5 iterations or fewer over 90 converge in 10 iterations or fewerwe compare the accuracy of the dual decomposition approach to two baselines first model 1 and second a naive integration method that enforces the hard constraint that model 1 must only consider de11we use a reimplementation that is a slight modification of collins model 1 with very similar performance and which uses the tag formalism of carreras et al pendencies seen in the firstbest output from the dependency parsertable 2 shows all three resultsthe dual decomposition method gives a significant gain in precision and recall over the naive combination method and boosts the performance of model 1 to a level that is close to some of the best singlepass parsers on the penn treebank test setdependency accuracy is also improved over the koo et al model in spite of the relatively low dependency accuracy of model 1 alonefigure 5 shows performance of the approach as a function of k the maximum number of iterations of dual decompositionfor this experiment for cases where the method has not converged for k l for k0 k this learning rate drops at a rate of 121 where t is the number of times that the dual increases from one iteration to the nextsee koo et al for a similar but less aggressive step size used to solve a different task
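The subgradient procedure described above (call the two oracle decoders, stop with a certificate of optimality when their solutions agree on every shared index, otherwise update the Lagrange multipliers) can be sketched as follows. This is a minimal sketch, not the authors' implementation: the oracle functions, the shared index set and the simple 1/(k+1) step schedule are placeholders (the paper's own schedule, noted in its footnote, instead decays with the number of times the dual value increases).

```python
def dual_decomposition(oracle_a, oracle_b, indices, max_iters=50, c=1.0):
    """Sketch of subgradient dual decomposition for two combined models.

    oracle_a(u) -> argmax_y f1(y) + sum_i u[i]*y[i], returned as a 0/1 dict over indices
    oracle_b(u) -> argmax_z f2(z) - sum_i u[i]*z[i], returned the same way
    """
    u = {i: 0.0 for i in indices}
    y = None
    for k in range(max_iters):
        y = oracle_a(u)
        z = oracle_b(u)
        if y == z:
            # the two subproblems agree on every shared index: this solution is
            # provably optimal for the original constrained problem (case 1)
            return y, True
        alpha = c / (k + 1.0)          # placeholder step-size schedule
        for i in indices:
            u[i] -= alpha * (y[i] - z[i])
    # no certificate within max_iters: fall back to one subproblem's last
    # solution (the averaging strategy mentioned above is another option)
    return y, False
```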
D10-1001
on dual decomposition and linear programming relaxations for natural language processing. this paper introduces dual decomposition as a framework for deriving inference algorithms for nlp problems. the approach relies on standard dynamic-programming algorithms as oracle solvers for subproblems, together with a simple method for forcing agreement between the different oracles. the approach provably solves a linear programming relaxation of the global inference problem. it leads to algorithms that are simple, in that they use existing decoding algorithms; efficient, in that they avoid exact algorithms for the full model; and often exact, in that empirically they often recover the correct solution in spite of using an lp relaxation. we give experimental results on two problems: (1) the combination of two lexicalized parsing models and (2) the combination of a lexicalized parsing model and a trigram part-of-speech tagger. we use the highest scoring output of the parsing submodel over all iterations.
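Several of the equations and the convergence statement in the text above arrive garbled by extraction. As a hedged reconstruction, consistent with the standard Lagrangian-relaxation presentation the section describes (and not a verbatim copy of the paper's own equation numbering), the combined problem, its Lagrangian dual, the subgradient update and the convergence condition of Theorem 6.1 can be written as:

```latex
% hedged reconstruction of the combined problem and its dual (requires amsmath)
\[
\begin{aligned}
&\max_{y \in \mathcal{Y},\; z \in \mathcal{Z}} \; f_1(y) + f_2(z)
   \qquad \text{s.t. } y(i,t) = z(i,t) \;\; \forall (i,t) \\[4pt]
&L(u) \;=\; \max_{y \in \mathcal{Y}} \Bigl( f_1(y) + \textstyle\sum_{i,t} u(i,t)\, y(i,t) \Bigr)
   \;+\; \max_{z \in \mathcal{Z}} \Bigl( f_2(z) - \textstyle\sum_{i,t} u(i,t)\, z(i,t) \Bigr) \\[4pt]
&u^{(k+1)}(i,t) \;=\; u^{(k)}(i,t) - \alpha_k \bigl( y^{(k)}(i,t) - z^{(k)}(i,t) \bigr) \\[4pt]
&\text{Theorem 6.1 (reconstructed): if } \lim_{k\to\infty} \alpha_k = 0
   \text{ and } \textstyle\sum_{k} \alpha_k = \infty,
   \text{ then } \lim_{k\to\infty} L(u^{(k)}) = \min_{u} L(u).
\end{aligned}
\]
```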
discriminative instance weighting for domain adaptation in statistical machine translation we describe a new approach to smt adaptation that weights outofdomain phrase pairs according to their relevance to the target domain determined by both how similar to it they appear to be and whether they belong to general language or not this extends previous work on discriminative weighting by using a finer granularity focusing on the properties of instances rather than corpus components and using a simpler training procedure we incorporate instance weighting into a mixturemodel framework and find that it yields consistent improvements over a wide range of baselines domain adaptation is a common concern when optimizing empirical nlp applicationseven when there is training data available in the domain of interest there is often additional data from other domains that could in principle be used to improve performancerealizing gains in practice can be challenging however particularly when the target domain is distant from the background datafor developers of statistical machine translation systems an additional complication is the heterogeneous nature of smt components which precludes a single universal approach to adaptationin this paper we study the problem of using a parallel corpus from a background domain to improve performance on a target domain for which a smaller amount of parallel training materialthough adequate for reasonable performanceis also availablethis is a standard adaptation problem for smtit is difficult when in and out are dissimilar as they are in the cases we studyfor simplicity we assume that out is homogeneousthe techniques we develop can be extended in a relatively straightforward manner to the more general case when out consists of multiple subdomainsthere is a fairly large body of work on smt adaptationwe introduce several new ideasfirst we aim to explicitly characterize examples from out as belonging to general language or notprevious approaches have tried to find examples that are similar to the target domainthis is less effective in our setting where in and out are disparatethe idea of distinguishing between general and domainspecific examples is due to daume and marcu who used a maximumentropy model with latent variables to capture the degree of specificitydaume applies a related idea in a simpler way by splitting features into general and domainspecific versionsthis highly effective approach is not directly applicable to the multinomial models used for core smt components which have no natural method for combining split features so we rely on an instanceweighting approach to downweight domainspecific examples in outwithin this framework we use features intended to capture degree of generality including the output from an svm classifier that uses the intersection between in and out as positive examplesour second contribution is to apply instance weighting at the level of phrase pairssentence pairs are the natural instances for smt but sentences often contain a mix of domainspecific and general languagefor instance the sentence similar improvements in haemoglobin levels were reported in the scientific literature for other epoetins would likely be considered domainspecific despite the presence of general phrases like were reported inphraselevel granularity distinguishes our work from previous work by matsoukas et al who weight sentences according to subcorpus and genre membershipfinally we make some improvements to baseline approacheswe train linear mixture models for 
conditional phrase pair probabilities over in and out so as to maximize the likelihood of an empirical joint phrasepair distribution extracted from a development setthis is a simple and effective alternative to setting weights discriminatively to maximize a metric such as bleua similar maximumlikelihood approach was used by foster and kuhn but for language models onlyfor comparison to informationretrieval inspired baselines eg we select sentences from out using language model perplexities from inthis is a straightforward technique that is arguably better suited to the adaptation task than the standard method of treating representative in sentences as queries then pooling the match resultsthe paper is structured as followssection 2 describes our baseline techniques for smt adaptation and section 3 describes the instanceweighting approachexperiments are presented in section 4section 5 covers relevant previous work on smt adaptation and section 6 concludesstandard smt systems have a hierarchical parameter structure toplevel loglinear weights are used to combine a small set of complex features interpreted as log probabilities many of which have their own internal parameters and objectivesthe toplevel weights are trained to maximize a metric such as bleu on a small development set of approximately 1000 sentence pairsthus provided at least this amount of in data is availableas it is in our settingadapting these weights is straightforwardwe focus here instead on adapting the two most important features the language model which estimates the probability p of a target word w following an ngram h and the translation models p and p which give the probability of source phrase s translating to target phrase t and vice versawe do not adapt the alignment procedure for generating the phrase table from which the tm distributions are derivedthe natural baseline approach is to concatenate data from in and outits success depends on the two domains being relatively close and on the out corpus not being so large as to overwhelm the contribution of inwhen out is large and distinct its contribution can be controlled by training separate in and out models and weighting their combinationan easy way to achieve this is to put the domainspecific lms and tms into the toplevel loglinear model and learn optimal weights with mert this has the potential drawback of increasing the number of features which can make mert less stable apart from mert difficulties a conceptual problem with loglinear combination is that it multiplies feature probabilities essentially forcing different features to agree on highscoring candidatesthis is appropriate in cases where it is sanctioned by bayes law such as multiplying lm and tm probabilities but for adaptation a more suitable framework is often a mixture model in which each event may be generated from some domainthis leads to a linear combination of domainspecific probabilities with weights in 0 1 normalized to sum to 1linear weights are difficult to incorporate into the standard mert procedure because they are hidden within a toplevel probability that represents the linear combination1 following previous work we circumvent this problem by choosing weights to optimize corpus loglikelihood which is roughly speaking the training criterion used by the lm and tm themselvesfor the lm adaptive weights are set as follows where α is a weight vector containing an element αi for each domain pi are the corresponding domainspecific models and p is an empirical distribution from a targetlanguage 
training corpuswe used the in dev set for thisit is not immediately obvious how to formulate an equivalent to equation for an adapted tm because there is no welldefined objective for learning tms from parallel corporathis has led previous workers to adopt ad hoc linear weighting schemes however we note that the final conditional estimates p from a given phrase table maximize the likelihood of joint empirical phrase pair counts over a wordaligned corpusthis suggests a direct parallel to where p is a joint empirical distribution extracted from the in dev set using the standard procedure2 an alternative form of linear combination is a maximum a posteriori combination for the tm this is where ci is the count in the in phrase table of pair po is its probability under the out tm and ci quots cithis is motivated by taking β po to be the parameters of a dirichlet prior on phrase probabilities then maximizing posterior estimates p given the in corpusintuitively it places more weight on out when less evidence from in is availableto set β we used the same criterion as for α over a dev corpus the map combination was used for tm probabilities only in part due to a technical difficulty in formulating coherent counts when using standard lm smoothing techniques 3 motivated by information retrieval a number of approaches choose relevant sentence pairs from out by matching individual source sentences from in or individual target hypotheses the matching sentence pairs are then added to the in corpus and the system is retrainedalthough matching is done at the sentence level this information is subsequently discarded when all matches are pooledto approximate these baselines we implemented a very simple sentence selection algorithm in which parallel sentence pairs from out are ranked by the perplexity of their target half according to the in language modelthe number of topranked pairs to retain is chosen to optimize devset bleu scorethe sentenceselection approach is crude in that it imposes a binary distinction between useful and nonuseful parts of outmatsoukas et al generalize it by learning weights on sentence pairs that are used when estimating relativefrequency phrasepair probabilitiesthe weight on each sentence is a value in 0 1 computed by a perceptron with boolean features that indicate collection and genre membershipwe extend the matsoukas et al approach in several waysfirst we learn weights on individual phrase pairs rather than sentencesintuitively as suggested by the example in the introduction this is the right granularity to capture domain effectssecond rather than relying on a division of the corpus into manuallyassigned portions we use features intended to capture the usefulness of each phrase pairfinally we incorporate the instanceweighting model into a general linear combination and learn weights and mixing parameters simultaneously where cλ is a modified count for pair in out you is a prior distribution and y is a prior weightthe original out counts co are weighted by a logistic function wλ to motivate weighting joint out counts as in we begin with the ideal objective for setting multinomial phrase probabilities 0 p dst which is the likelihood with respect to the true in distribution pijiang and zhai suggest the following derivation making use of the true out distribution po where each fi is a feature intended to charac 0ˆ argmax pf log pθ terize the usefulness of weighted by ai θ st pfpo log pθ the mixing parameters and feature weights lectively 0 are optimized simultaneously using dev θ st 
pfco log pθ set maximum likelihood as before argmax po θ st ˆ argmax p log p φ st this is a somewhat less direct objective than used by matsoukas et al who make an iterative approximation to expected terhowever it is robust efficient and easy to implement4 to perform the maximization in we used the popular lbfgs algorithm which requires gradient informationdropping the conditioning on 0 for brevity and letting cλ cλ yu and cλ 4note that the probabilities in need only be evaluated over the support of p which is quite small when this distribution is derived from a dev setmaximizing is thus much faster than a typical mert run where co are the counts from out as in this has solutions where pi is derived from the in corpus using relativefrequency estimates and po is an instanceweighted model derived from the out corpusthis combination generalizes and we use either at a to obtain a fixedweight linear combination or at ci 0 to obtain a map combinationwe model po using a map criterion over weighted phrasepair counts and from the similarity to assuming y 0 we see that wλ can be interpreted as approximating pfpothe logistic function whose outputs are in 0 1 forces pp _ pothis is not unreasonable given the application to phrase pairs from out but it suggests that an interesting alternative might be to use a plain loglinear weighting function exp with outputs in 0 oowe have not yet tried thisan alternate approximation to would be to let w directly approximate pˆiwith the additional assumption that can be restricted to the support of co this is equivalent to a flat alternative to in which each nonzero co is set to onethis variant is tested in the experiments belowa final alternate approach would be to combine weighted joint frequencies rather than conditional estimates ie ci wco suitably normalized5 such an approach could be simulated by a mapstyle combination in which separate 0 values were maintained for each t this would make the model more powerful but at the cost of having to learn to downweight out separately for each t which we suspect would require more training data for reliable performancewe have not explored this strategywe used 22 features for the logistic weighting model divided into two groups one intended to reflect the degree to which a phrase pair belongs to general language and one intended to capture similarity to the in domainthe 14 generallanguage features embody straightforward cues frequency centrality as reflected in model scores and lack of burstinessthey are 5we are grateful to an anonymous reviewer for pointing this out6one of our experimental settings lacks document boundaries and we used this approximation in both settings for consistencythe 8 similaritytoin features are based on word frequencies and scores from various models trained on the in corpus to avoid numerical problems each feature was normalized by subtracting its mean and dividing by its standard deviationin addition to using the simple features directly we also trained an svm classifier with these features to distinguish between in and out phrase pairsphrase tables were extracted from the in and out training corpora and phrase pairs in the intersection of the in and out phrase tables were used as positive examples with two alternate definitions of negative examples the classifier trained using the 2nd definition had higher accuracy on a development setwe used it to score all phrase pairs in the out table in order to provide a feature for the instanceweighting modelwe carried out translation experiments in two 
different settingsthe first setting uses the european medicines agency corpus as in and the europarl corpus as out for englishfrench translation in both directionsthe dev and test sets were randomly chosen from the emea corpusfigure 1 shows sample sentences from these domains which are widely divergentthe second setting uses the newsrelated subcorpora for the nist09 mt chinese to english evaluation8 as in and the remaining nist parallel chineseenglish corpora as outthe dev corpus was taken from the nist05 evaluation set augmented with some randomlyselected material reserved from the training setthe nist06 and nist08 evaluation sets were used for testingcompared to the emeaep setting the two domains in the nist setting are less homogeneous and more similar to each other there is also considerably more in text availablethe corpora for both settings are summarized in table 1the reference medicine for silapo is eprexerypo which contains epoetin alfale medicament de reference de silapo est eprexerypo qui contient de lepoetine alfa i would also like to point out to commissioner liikanen that it is not easy to take a matter to a national courtje voudrais preciser a ladresse du commissaire liikanen quil nest pas aise de recourir aux tribunaux nationauxwe used a standard onepass phrasebased system with the following features relativefrequency tm probabilities in both directions a 4gram lm with kneserney smoothing worddisplacement distortion model and word countfeature weights were set using ochs mert algorithm the corpus was wordaligned using both hmm and ibm2 models and the phrase table was the union of phrases extracted from these separate alignments with a length limit of 7it was filtered to retain the top 30 translations for each source phrase using the tm part of the current loglinear modeltable 2 shows results for both settings and all methods described in sections 2 and 3the 1st block contains the simple baselines from section 21the natural baseline outperforms the pure in system only for emeaep frenloglinear combination improves on this in all cases and also beats the pure in systemthe 2nd block contains the ir system which was tuned by selecting text in multiples of the size of the emea training corpus according to dev set performancethis significantly underperforms loglinear combinationthe 3rd block contains the mixture baselinesthe linear lm tm and map tm used with nonadapted counterparts perform in all cases slightly worse than the loglinear combination which adapts both lm and tm componentshowever when the linear lm is combined with a linear tm or map tm the results are much better than a loglinear combination for the emea setting and on a par for nistthis is consistent with the nature of these two settings loglinear combination which effectively takes the intersection of in and out does relatively better on nist where the domains are broader and closer togethersomewhat surprisingly there do not appear to be large systematic differences between linear and map combinationsthe 4th block contains instanceweighting models trained on all features used within a map tm combination and with a linear lm mixturethe iw all map variant uses a non0 y weight on a uniform prior in p and outperforms a version with y 0 and the flattened variant described in section 32clearly retaining the original frequencies is important for good performance and globally smoothing the final weighted frequencies is crucialthis best instanceweighting model beats the equivalant model without instance weights by between 
06 bleu and 18 bleu and beats the loglinear baseline by a large marginthe final block in table 2 shows models trained on feature subsets and on the svm feature described in 34the generallanguage features have a slight advantage over the similarity features and both are better than the svm featurewe have already mentioned the closely related work by matsoukas et al on discriminative corpus weighting and jiang and zhai on instance weightingit is difficult to directly compare the matsoukas et al results with ours since our outofdomain corpus is homogeneous given heterogeneous training data however it would be trivial to include matsoukasstyle identity features in our instanceweighting modelalthough these authors report better gains than ours they are with respect to a nonadapted baselinefinally we note that jiangs instanceweighting framework is broader than we have presented above encompassing among other possibilities the use of unlabelled in data which is applicable to smt settings where sourceonly in corpora are availableit is also worth pointing out a connection with daumes work that splits each feature into domainspecific and general copiesat first glance this seems only peripherally related to our work since the specificgeneral distinction is made for features rather than instanceshowever for multinomial models like our lms and tms there is a one to one correspondence between instances and features eg the correspondence between a phrase pair and its conditional multinomial probability pas mentioned above it is not obvious how to apply daumes approach to multinomials which do not have a mechanism for combining split featuresrecent work by finkel and manning which recasts daumes approach in a hierarchical map framework may be applicable to this problemmoving beyond directly related work major themes in smt adaptation include the ir and mixture approaches for lms and tms described above as well as methods for exploiting monolingual indomain text typically by translating it automatically and then performing self training there has also been some work on adapting the word alignment model prior to phrase extraction and on dynamically choosing a dev set other work includes transferring latent topic distributions from source to target language for lm adaptation and adapting features at the sentence level to different categories of sentence in this paper we have proposed an approach for instanceweighting phrase pairs in an outofdomain corpus in order to improve indomain performanceeach outofdomain phrase pair is characterized by a set of simple features intended to reflect how useful it will bethe features are weighted within a logistic model to give an overall weight that is applied to the phrase pairs frequency prior to making mapsmoothed relativefrequency estimates these estimates are in turn combined linearly with relativefrequency estimates from an indomain phrase tablemixing smoothing and instancefeature weights are learned at the same time using an efficient maximumlikelihood procedure that relies on only a small indomain development corpuswe obtained positive results using a very simple phrasebased system in two different adaptation settings using englishfrench europarl to improve a performance on a small specialized medical domain and using nonnews portions of the nist09 training material to improve performance on the newsrelated corporain both cases the instanceweighting approach improved over a wide range of baselines giving gains of over 2 bleu points over the best nonadapted 
baseline and gains of between 06 and 18 over an equivalent mixture model in future work we plan to try this approach with more competitive smt systems and to extend instance weighting to other standard smt components such as the lm lexical phrase weights and lexicalized distortionwe will also directly compare with a baseline similar to the matsoukas et al approach in order to measure the benefit from weighting phrase pairs rather than full sentencesfinally we intend to explore more sophisticated instanceweighting features for capturing the degree of generality of phrase pairs
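The combination described above (a logistic instance weight w_λ on each out-of-domain phrase pair's joint count, a small prior γu, and a MAP-style combination with the in-domain relative-frequency estimate) can be sketched in a few lines. This is a minimal sketch under several assumptions: the feature weights λ, the prior weight γ and the prior strength β are taken as already trained or tuned, the feature dictionary covers every out-of-domain pair, and all names are illustrative rather than the authors' own.

```python
import math

def logistic_weight(features, lam):
    """w_lambda(s,t): a logistic function of the pair's feature vector, in (0, 1)."""
    z = sum(lam.get(name, 0.0) * value for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def adapted_phrase_prob(s, t, c_in, c_out, feats, lam, gamma, uniform, beta):
    """Hedged sketch: instance-weighted OUT estimate, MAP-combined with IN counts.

    c_in, c_out : dicts mapping (source, target) phrase pairs to joint counts
    feats       : dict mapping OUT pairs to their feature dicts
    """
    def weighted(pair):
        # c_lambda(s,t): original OUT count scaled by the logistic weight
        if pair not in c_out:
            return 0.0
        return logistic_weight(feats[pair], lam) * c_out[pair]

    # p_o(s|t): relative frequency of weighted OUT counts, smoothed by gamma * uniform
    out_pairs_t = [pair for pair in c_out if pair[1] == t]
    denom = sum(weighted(pair) + gamma * uniform for pair in out_pairs_t)
    p_out = ((weighted((s, t)) + gamma * uniform) / denom) if denom > 0 else 0.0

    # MAP combination: the OUT estimate acts as a prior whose influence fades
    # as more IN evidence for target phrase t becomes available
    c_t_in = sum(count for pair, count in c_in.items() if pair[1] == t)
    return (c_in.get((s, t), 0.0) + beta * p_out) / (c_t_in + beta)
```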
D10-1044
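The mixture-weight criterion described above (choose the weights α to maximize the log-likelihood of an empirical distribution from the IN dev set under the linear combination of component models) is not tied to a particular optimizer in the text; EM is one standard way to carry out that maximization, and the sketch below is a minimal, hedged version of that choice. The event representation and function names are illustrative.

```python
def em_mixture_weights(dev_events, components, iters=20):
    """EM sketch for linear mixture weights alpha maximizing dev-set log-likelihood.

    dev_events : list of (event, count) pairs from the IN dev set,
                 e.g. ((history, word), count) for the LM case
    components : list of functions; components[i](event) returns p_i(event)
    """
    k = len(components)
    alpha = [1.0 / k] * k
    for _ in range(iters):
        expected = [0.0] * k
        total = 0.0
        for event, count in dev_events:
            post = [alpha[i] * components[i](event) for i in range(k)]
            z = sum(post)
            if z == 0.0:
                continue                             # event unseen by every component
            for i in range(k):
                expected[i] += count * post[i] / z   # E-step: responsibilities
            total += count
        if total == 0.0:
            break
        alpha = [e / total for e in expected]        # M-step: renormalize
    return alpha
```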
discriminative instance weighting for domain adaptation in statistical machine translation. we describe a new approach to smt adaptation that weights out-of-domain phrase pairs according to their relevance to the target domain, determined by both how similar to it they appear to be and whether they belong to general language or not. this extends previous work on discriminative weighting by using a finer granularity, focusing on the properties of instances rather than corpus components, and using a simpler training procedure. we incorporate instance weighting into a mixture-model framework and find that it yields consistent improvements over a wide range of baselines. we rank the sentence pairs in the general-domain corpus according to the perplexity scores of sentences, which are computed with respect to in-domain language models. we apply linear interpolation to combine the instance-weighted out-of-domain model with an in-domain model. we propose a method for machine translation that uses features to capture degrees of generality.
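The information-retrieval-style baseline described above, which ranks OUT sentence pairs by the perplexity of their target half under the IN language model and keeps only the top-ranked pairs (the cutoff being chosen to optimize dev-set BLEU), is easy to sketch. The LM scoring function is assumed to be available; names are illustrative.

```python
import math

def select_out_sentences(out_pairs, in_lm_logprob, keep):
    """Rank OUT-domain sentence pairs by target-side perplexity under the IN LM
    and keep the `keep` lowest-perplexity pairs (a hedged sketch of the baseline).

    out_pairs     : list of (source_tokens, target_tokens) pairs
    in_lm_logprob : function returning the total natural-log probability of a
                    token sequence under the IN-domain language model
    """
    def perplexity(tokens):
        return math.exp(-in_lm_logprob(tokens) / max(len(tokens), 1))

    ranked = sorted(out_pairs, key=lambda pair: perplexity(pair[1]))
    return ranked[:keep]
```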
a multipass sieve for coreference resolution most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features this approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones to overcome this problem we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision each tier builds on the previous tiers entity cluster output further our model propagates global information by sharing attributes across mentions in the same cluster this cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time the framework is highly modular new coreference modules can be plugged in without any change to the other modules in spite of its simplicity our approach outperforms many stateoftheart supervised and unsupervised models on several standard corpora this suggests that sievebased approaches could be applied to other nlp tasks recent work on coreference resolution has shown that a rich feature space that models lexical syntactic semantic and discourse phenomena is crucial to successfully address the task when such a rich representation is available even a simple deterministic model can achieve stateoftheart performance by and large most approaches decide if two mentions are coreferent using a single function over all these features and information local to the two mentions1 this is problematic for two reasons lower precision features may overwhelm the smaller number of high precision ones and local information is often insufficient to make an informed decisionconsider this example the second attack occurred after some rocket firings aimed apparently toward the israelis apparently in retaliationwe are checking our facts on that one the president quoted by ari fleischer his spokesman is saying he is concerned the strike will undermine efforts by palestinian authorities to bring an end to terrorist attacks and does not contribute to the security of israelmost stateoftheart models will incorrectly link we to the israelis because of their proximity and compatibility of attributes in contrast a more cautious approach is to first cluster the israelis with israel because the demonymy relation is highly precisethis initial clustering step will assign the correct animacy attribute to the corresponding geopolitical entity which will prevent the incorrect merging with the mention we in later stepswe propose an unsupervised sievelike approach to coreference resolution that addresses these is1as we will discuss below some approaches use an additional component to infer the overall best mention clusters for a document but this is still based on confidence scores assigned using local information suesthe approach applies tiers of coreference models one at a time from highest to lowest precisioneach tier builds on the entity clusters constructed by previous models in the sieve guaranteeing that stronger features are given precedence over weaker onesfurthermore each models decisions are richly informed by sharing attributes across the mentions clustered in earlier tiersthis ensures that each decision uses all of the information available at the timewe implemented all components in our approach using only deterministic modelsall our components are unsupervised in the sense that they 
do not require training on gold coreference linksthe contributions of this work are the following we show that a simple scaffolding framework that deploys strong features through tiers of models performs significantly better than a singlepass modeladditionally we propose several simple yet powerful new featuresthis work builds upon the recent observation that strong features outweigh complex models for coreference resolution in both supervised and unsupervised learning setups our work reinforces this observation and extends it by proposing a novel architecture that allows easy deployment of such features and infuses global information that can be readily exploited by these features or constraintsmost coreference resolution approaches perform the task by aggregating local decisions about pairs of mentions two recent works that diverge from this pattern are culotta et al and poon and domingos they perform coreference resolution jointly for all mentions in a document using firstorder probabilistic models in either supervised or unsupervised settingshaghighi and klein propose a generative approach that models entity clusters explicitly using a mostlyunsupervised generative modelas previously mentioned our work is not constrained by firstorder or bayesian formalisms in how it uses cluster informationadditionally the deterministic models in our tiered model are significantly simpler yet perform generally better than the complex inference models proposed in these worksfrom a high level perspective this work falls under the theory of shaping defined as a method of successive approximations for learning this theory is known by different names in many nlp applications brown et al used simple models as stepping stones for more complex word alignment models collins used cautious decision list learning for named entity classification spitkovsky et al used baby steps for unsupervised dependency parsing etcto the best of our knowledge we are the first to apply this theory to coreference resolutionintradocument coreference resolution clusters together textual mentions within a single document based on the underlying referent entitymentions are usually noun phrases headed by nominal or pronominal terminalsto facilitate comparison with most of the recent previous work we report results using gold mention boundarieshowever our approach does not make any assumptions about the underlying mentions so it is trivial to adapt it to predicted mention boundaries for a simple mention detection modelwe used the following corpora for development and evaluation we used the first corpus for developmentthe other corpora are reserved for testingwe parse all documents using the stanford parser the syntactic information is used to identify the mention head words and to define the ordering of mentions in a given sentence for a fair comparison with previous work we do not use gold named entity labels or mention types but instead take the labels provided by the stanford named entity recognizer we use three evaluation metrics widely used in the literature pairwise f1 computed over mention pairs in the same entity cluster muc which measures how many predicted clusters need to be merged to cover the gold clusters and b3 which uses the intersection between predicted and gold clusters for a given mention to mark correct mentions and the sizes of the the predicted and gold clusters as denominators for precision and recall respectivelywe refer the interested reader to for an analysis of these metricsour sieve framework is implemented 
as a succession of independent coreference modelswe first describe how each model selects candidate mentions and then describe the models themselvesgiven a mention mi each model may either decline to propose a solution or deterministically select a single best antecedent from a list of previous mentions m1 mi1we sort candidate antecedents using syntactic information provided by the stanford parser as follows same sentence candidates in the same sentence are sorted using lefttoright breadthfirst traversal of syntactic trees figure 1 shows an example of candidate ordering based on this traversalthe lefttoright ordering favors subjects which tend to appear closer to the beginning of the sentence and are more probable antecedentsthe breadthfirst traversal promotes syntactic salience by ranking higher noun phrases that are closer to the top of the parse tree if the sentence containing the anaphoric mention contains multiple clauses we repeat the above heuristic separately in each s constituent starting with the one containing the mentionprevious sentence for all nominal mentions we sort candidates in the previous sentences using righttoleft breadthfirst traversalthis guarantees syntactic salience and also favors document proximityfor pronominal mentions we sort candidates in previous sentences using lefttoright traversal in order to favor subjectssubjects are more probable antecedents for pronouns for example this ordering favors the correct candidate for the mention they pepsi says it expects to double quakers snack food growth rate after a monthlong courtship they agreed to buy quaker oatsin a significant departure from previous work each model in our framework gets clustering information for each mention from the earlier coreference models in the multipass systemin other words each mention mi may already be assigned to a cluster cj containing a set of mentions cj mj1 mj mi e cjunassigned mentions are unique members of their own clusterwe use this information in several ways attribute sharing pronominal coreference resolution is severely affected by missing attributes and incorrect attributes to address this issue we perform a union of all mention attributes in a given cluster and share the result with all cluster mentionsif attributes from different mentions contradict each other we maintain all variantsfor example our naive number detection assigns singular to the mention a group of students and plural to five studentswhen these mentions end up in the same cluster the resulting number attributes becomes the set singular pluralthus this cluster can later be merged with both singular and plural pronounsmention selection traditionally a coreference model attempts to resolve every mention in the text which increases the likelihood of errorsinstead in each of our models we exploit the cluster information received from the previous stages by resolving only mentions that are currently first in textual order in their clusterfor example given the following ordered list of mentions mi m2 m3 m4 m5 m6 where the superscript indicates cluster id our model will attempt to resolve only m2 and m4these two are the only mentions that have potential antecedents and are currently marked as the first mentions in their clustersthe intuition behind this heuristic is twofoldfirst early cluster mentions are usually better defined than subsequent ones which are likely to have fewer modifiers or are pronouns several of our models use this modifier informationsecond by definition first mentions appear closer to the 
beginning of the document hence there are fewer antecedent candidates to select from and fewer opportunities to make a mistakesearch pruning finally we prune the search space using discourse saliencewe disable coreference for first cluster mentions that are or start with indefinite pronouns or start with indefinite articles one exception to this rule is the model deployed in the first pass it only links mentions if their entire extents match exactlythis model is triggered for all nominal mentions regardless of discourse salience because it is possible that indefinite mentions are repeated in a document when concepts are discussed but not instantiated eg a sports bar below we now describe the coreference models implemented in the sievefor clarity we summarize them in table 1 and show the cumulative performance as they are added to the sieve in table 2this model links two mentions only if they contain exactly the same extent text including modifiers and determiners eg the shahab 3 groundground missileas expected this model is extremely precise with a pairwise precision over 96this model links two mentions if any of the conditions below are satisfied appositive the two nominal mentions are in an appositive construction eg israels deputy defense minister ephraim sneh said we use the same syntactic rules to detect appositions as haghighi and klein predicate nominative the two mentions are in a copulative subjectobject relation eg the new yorkbased college board is a nonprofit organization that administers the sats and promotes higher education role appositive the candidate antecedent is headed by a noun and appears as a modifier in an np whose head is the current mention eg actress rebecca schaefferthis feature is inspired by haghighi and klein who triggered it only if the mention is labeled as a person by the nerwe constrain this heuristic more in our work we allow this feature to match only if the mention is labeled as a person the antecedent is animate and the antecedents gender is not neutralrelative pronoun the mention is a relative pronoun that modifies the head of the antecedent np eg the finance street which has already formed in the waitan districtacronym both mentions are tagged as nnp and one of them is an acronym of the other eg agence france presse afpwe use a simple acronym detection algorithm which marks a mention as an acronym of another if its text equals the sequence of upper case characters in the other mentionwe will adopt better solutions for acronym detection in future work demonym one of the mentions is a demonym of the other eg israel israelifor demonym detection we use a static list of countries and their gentilic forms from wikipedia3 all the above features are extremely preciseas shown in table 2 the pairwise precision of the sieve after adding these features is over 95 and recall increases 5 pointslinking a mention to an antecedent based on the naive matching of their head words generates a lot of spurious links because it completely ignores possibly incompatible modifiers for example yale university and harvard university have similar head words but they are obviously different entitiesto address this issue this pass implements several features that must all be matched in order to yield a link cluster head match the mention head word matches any head word in the antecedent clusternote that this feature is actually more relaxed than naive head matching between mention and antecedent candidate because it is satisfied when the mentions head matches the head of any 
entity in the candidates clusterwe constrain this feature by enforcing a conjunction with the features belowword inclusion all the nonstop4 words in the mention cluster are included in the set of nonstop words in the cluster of the antecedent candidatethis heuristic exploits the property of discourse that it is uncommon to introduce novel information in later mentions typically mentions of the same entity become shorter and less informative as the narrative progressesfor example the two mentions in intervene in the florida supreme courts move does look like very dramatic change made by the florida court point to the same entity but the two mentions in the text below belong to different clusters the pilot had confirmed he had turned onto the correct runway but pilots behind him say he turned onto the wrong runwaycompatible modifiers only the mentions modifiers are all included in the modifiers of the antecedent candidatethis feature models the same discourse property as the previous feature but it focuses on the two individual mentions to be linked rather than their entire clustersfor this feature we only use modifiers that are nouns or adjectivesnot iwithini the two mentions are not in an iwithini construct ie one cannot be a child np in the others np constituent this pass continues to maintain high precision while improving recall significantly passes 4 and 5 are different relaxations of the feature conjunction introduced in pass 3 ie pass 4 removes the compatible modifiers only feature while pass 5 removes the word inclusion constraintall in all these two passes yield an improvement of 17 pairwise f1 points due to recall improvementstable 2 shows that the word inclusion feature is more precise than compatible modifiers only but the latter has better recallthis pass relaxes the cluster head match heuristic by allowing the mention head to match any word in the cluster of the candidate antecedentfor example this heuristic matches the mention sanders to a cluster containing the mentions sauls the judge circuit judge n sanders saulsto maintain high precision this pass requires that both mention and antecedent be labeled as named entities and the types coincidefurthermore this pass implements a conjunction of the above features with word inclusion and not iwithinithis pass yields less than 1 point improvement in most metricswith one exception all the previous coreference models focus on nominal coreference resolutionhowever it would be incorrect to say that our framework ignores pronominal coreference in the first six passesin fact the previous models prepare the stage for pronominal coreference by constructing precise clusters with shared mention attributesthese are crucial factors for pronominal coreferencelike previous work we implement pronominal coreference resolution by enforcing agreement constraints between the coreferent mentionswe use the following attributes for these constraints number we assign number attributes based on a static list for pronouns ner labels mentions marked as a named entity are considered singular with the exception of organizations which can be both singular or plural part of speech tags nns tags are plural and all other nn tags are singular and a static dictionary from gender we assign gender attributes from static lexicons from person we assign person attributes only to pronounshowever we do not enforce this constraint when linking two pronouns if one appears within quotesthis is a simple heuristic for speaker detection eg i and she point to the same person in 
i voted my conscience she saidanimacy we set animacy attributes using a static list for pronouns ner labels eg person is animate whereas location is not and a dictionary boostrapped from the web ner label from the stanford nerif we cannot detect a value we set attributes to unknown and treat them as wildcards ie they can match any other valuethis final model raises the pairwise recall of our system almost 22 percentage points with only an 8 point drop in pairwise precisiontable 2 shows that similar behavior is measured for all other metricsafter all passes have run we take the transitive closure of the generated clusters as the system outputwe present the results of our approach and other relevant prior work in table 3we include in the table all recent systems that report results under the same conditions as our experimental setup and use the same corporawe exclude from this analysis two notable works that report results only on a version of the task that includes finding mentions the haghighi and klein numbers have two variants with semantics and without to measure the contribution of our multipass system we also present results from a singlepass variant of our system that uses all applicable features from the multipass system our sieve model outperforms all systems on two out of the four evaluation corpora on all metricson the corpora where our model is not best it ranks a close secondfor example in ace2004culottatest our system has a b3 f1 score only 4 points lower than bengston and roth and it outperforms all unsupervised approachesin muc6test our sieves b3 f1 score is 18 points lower than haghighi and klein s but it outperforms a supervised system that used gold named entity labelsfinally the multipass architecture always beats the equivalent singlepass system with its contribution ranging between 1 and 4 f1 points depending on the corpus and evaluation metricour approach has the highest precision on all corpora regardless of evaluation metricwe believe this is particularly useful for largescale nlp applications that use coreference resolution components eg question answering or information extractionthese applications can generally function without coreference information so it is beneficial to provide such information only when it is highly precisethe sieve model outperforms all other systems on at least two test sets even though most of the other models are significantly richeramongst the comparisons several are supervised the system of haghighi and klein s uses a lexicon of semanticallycompatible noun pairs acquired transductively ie with knowledge of the mentions in the test setour system does not rely on labeled corpora for training nor access to corpora during testing the system that is closest to ours is haghighi and klein slike us they use a rich set of features and deterministic decisionshowever theirs is a singlepass model with a smaller feature set table 3 shows that on the two corpora where results for this system are available we outperform it considerably on all metricsto understand if the difference is due to the multipass architecture or the richer feature set we compared s against both our multipass system and its singlepass variantthe comparison indicates that both these contributions help our singlepass system outperforms haghighi and klein consistently and the multipass architecture further improves the performance of our singlepass system between 1 and 4 f1 points depending on the corpus and evaluation metricrecent unsupervised coreference work from haghighi and 
klein included a novel semantic component that matched related head words learned from select wikipedia articlesthey first identified articles relevant to the entity mentions in the test set and then bootstrapped from known syntactic patterns for apposition and predicatenominatives in order to learn a database of related head pairsthey show impressive gains by using these learned pairs in coreference decisionsthis type of learning using test set mentions is often described as transductiveour work instead focuses on an approach that does not require access to the dataset beforehandwe thus did not include a similar semantic component in our system given that running a bootstrapping learner whenever a new data set is encountered is not practical and ultimately reduces the usability of this nlp componenthowever our results show that our sieve algorithm with minimal semantic information still performs as well as the haghighi and klein system with semanticsthe sieve architecture offers benefits beyond improved accuracyits modular design provides a flexibility for features that is not available in most supervised or unsupervised systemsthe sieve allows new features to be seamlessly inserted without affecting the other componentsfor instance once a new high precision feature is inserted as its own stage it will benefit later stages with more precise clusters but it will not interfere with their particular algorithmic decisionsthis flexibility is in sharp contrast to supervised classifiers that require their models to be retrained on labeled data and unsupervised systems that do not offer a clear insertion point for new featuresit can be difficult to fully understand how a system makes a single decision but the sieve allows for flexible usage with minimal efforttable 4 shows the number of incorrect pairwise links generated by our system on the muc6test corpusthe table indicates that most of our errors are for nominal mentionsfor example the combined number of errors for proper or common noun mentions is three times larger than the number of errors made for pronominal mentionsthe table also highlights that most of our errors are recall errorsthere are eight times more recall errors than precision errors in our outputthis is a consequence of our decision to prioritize highly precise features in the sievethe above analysis illustrates that our next effort should focus on improving recallin order to understand the limitations of our current system we randomly selected 60 recall errors and investigated their causesnot surprisingly the causes are unique to each typefor proper nouns 50 of recall errors are due to mention lengthening mentions that are longer than their earlier mentionsfor example washingtonbased usair appears after usair in the text so our head matching components skip it because their high precision depends on disallowing new modifiers as the discourse proceedswhen the mentions were reversed they matchthe common noun recall errors are very different from proper nouns 17 of the 20 random examples can be classified as semantic knowledgethese errors are roughly evenly split between recognizing categories of names and understanding hypernym relations like settlements and agreementspronoun errors come in two formsroughly 40 of these errors are attribute mismatches involving sometimes ambiguous uses of gender and number another 40 are not semantic or attributebased but rather simply arise due to the order in which we check potential antecedentsin all these situations the correct links are 
missed because the system chooses a closer antecedentthese four highlighted errors add up to 77 of all recall errors in the selected setin general each error type is particular to a specific mention typethis suggests that recall improvements can be made by focusing on one mention type without aversely affecting the othersour sievebased approach to coreference uniquely allows for such new models to be seamlessly insertedwe presented a simple deterministic approach to coreference resolution that incorporates documentlevel information which is typically exploited only by more complex joint learning modelsour sieve architecture applies a battery of deterministic coreference models one at a time from highest to lowest precision where each model builds on the previous models cluster outputdespite its simplicity our approach outperforms or performs comparably to the state of the art on several corporaan additional benefit of the sieve framework is its modularity new features or models can be inserted in the system with limited understanding of the other features already deployedour code is publicly released5 and can be used both as a standalone coreference system and as a platform for the development of future systemsthe strong performance of our system suggests the use of sieves in other nlp tasks for which a variety of very highprecision features can be designed and nonlocal features can be shared likely candidates include relation and event extraction template slot filling and author name deduplicationwe gratefully acknowledge the support of the defense advanced research projects agency machine reading program under air force research laboratory prime contract nofa875009c0181any opinions findings and conclusion or recommendations expressed in this material are those of the author and do not necessarily reflect the view of darpa afrl or the us governmentmany thanks to jenny finkel for writing a reimplementation of much of haghighi and klein which served as the starting point for the work reported herewe also thank nicholas rizzolo and dan roth for helping us replicate their experimental setup and heng ji and dekang lin for providing their gender lexicon
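To make the control flow of the sieve described above concrete, here is a minimal Python sketch of the tiered architecture: passes run from highest to lowest precision, a mention is only resolved if it is currently first in its cluster, and merged clusters immediately share the union of their attributes. The class, the pass functions, and the data structures are illustrative stand-ins reconstructed from the description in the text, not the authors' released code; a full implementation would also order candidate antecedents by the tree-based salience heuristics described above rather than plain document order.

```python
# Sketch of the multi-pass sieve control flow (illustrative, not the released system).
from typing import Callable, Dict, List, Optional

class Mention:
    def __init__(self, idx: int, text: str, attributes: Dict[str, set]):
        self.idx = idx                # textual order in the document
        self.text = text
        self.attributes = attributes  # e.g. {"number": {"singular"}, "gender": {"male"}}
        self.cluster = idx            # every mention starts in its own cluster

# A pass maps (mention, candidate antecedents) to one antecedent or None.
Pass = Callable[[Mention, List[Mention]], Optional[Mention]]

def exact_match_pass(mention: Mention, candidates: List[Mention]) -> Optional[Mention]:
    # corresponds to the first pass in the text: link only if the extents match exactly
    for antecedent in candidates:
        if antecedent.text.lower() == mention.text.lower():
            return antecedent
    return None

def run_sieve(mentions: List[Mention], passes: List[Pass]) -> Dict[int, List[Mention]]:
    """mentions are assumed to be in document order; passes from highest to lowest precision."""
    clusters: Dict[int, List[Mention]] = {m.idx: [m] for m in mentions}
    for coref_pass in passes:
        for m in mentions:
            # mention selection: only resolve mentions currently first in their cluster
            if clusters[m.cluster][0] is not m:
                continue
            # document-order candidates for brevity; the paper sorts them by salience
            candidates = [a for a in mentions if a.idx < m.idx]
            antecedent = coref_pass(m, candidates)
            if antecedent is None or antecedent.cluster == m.cluster:
                continue
            # merge the mention's cluster into the antecedent's cluster
            target = antecedent.cluster
            for moved in clusters.pop(m.cluster):
                moved.cluster = target
                clusters[target].append(moved)
            clusters[target].sort(key=lambda x: x.idx)
            # attribute sharing: union all attributes in the cluster, keeping variants
            shared: Dict[str, set] = {}
            for cm in clusters[target]:
                for key, vals in cm.attributes.items():
                    shared.setdefault(key, set()).update(vals)
            for cm in clusters[target]:
                cm.attributes = {k: set(v) for k, v in shared.items()}
    return clusters
```

The returned clusters already reflect the transitive closure described in the text, since merging is performed cluster-to-cluster rather than pairwise.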
D10-1048
a multipass sieve for coreference resolution. most coreference resolution models determine if two mentions are coreferent using a single function over a set of constraints or features. this approach can lead to incorrect decisions as lower precision features often overwhelm the smaller number of high precision ones. to overcome this problem we propose a simple coreference architecture based on a sieve that applies tiers of deterministic coreference models one at a time from highest to lowest precision. each tier builds on the previous tiers entity cluster output. further our model propagates global information by sharing attributes across mentions in the same cluster. this cautious sieve guarantees that stronger features are given precedence over weaker ones and that each decision is made using all of the information available at the time. the framework is highly modular: new coreference modules can be plugged in without any change to the other modules. in spite of its simplicity our approach outperforms many stateoftheart supervised and unsupervised models on several standard corpora. this suggests that sievebased approaches could be applied to other nlp tasks. our rule based model obtains competitive results with less time. the candidate antecedents for the pronoun are ordered based on a notion of discourse salience that favors syntactic salience and document proximity. we develop accurate unsupervised systems that exploit simple but robust linguistic principles
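Two of the smaller heuristics used in the passes described above, the acronym test and attribute agreement with unknown values acting as wildcards, are easy to state in code. The functions below are reconstructions from the description in the text, not the released system; mention attributes are assumed to be dictionaries of string sets, and an empty set stands for an unknown attribute.

```python
# Illustrative reconstructions of two heuristics from the sieve passes above.

def is_acronym(short: str, long: str) -> bool:
    """True if `short` equals the sequence of upper-case characters in `long`,
    e.g. is_acronym("AFP", "Agence France Presse") -> True. The text additionally
    requires both mentions to be tagged NNP, which is omitted here."""
    return short.isupper() and short == "".join(c for c in long if c.isupper())

def attributes_agree(m1: dict, m2: dict,
                     keys=("number", "gender", "person", "animacy")) -> bool:
    """Two mentions agree if, for every attribute, their value sets intersect
    or one of them is unknown (empty set acts as a wildcard)."""
    for key in keys:
        v1, v2 = m1.get(key, set()), m2.get(key, set())
        if v1 and v2 and not (v1 & v2):
            return False
    return True

# e.g. a cluster carrying {"number": {"singular", "plural"}} (the "group of
# students" case) agrees with both singular and plural pronouns.
```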
nouns are vectors adjectives are matrices representing adjectivenoun constructions in semantic space we propose an approach to adjectivenoun composition for corpusbased distributional semantics that building on insights from theoretical linguistics represents nouns as vectors and adjectives as datainduced functions over nominal vectors our model significantly outperforms the rivals on the task of reconstructing an vectors not seen in training a small posthoc analysis further suggests that when the modelgenerated an vector is not similar to the corpusobserved an vector this is due to anomalies in the latter we show moreover that our approach provides two novel ways to represent adjective meanings alternative to its representation via corpusbased cooccurrence vectors both outperforming the latter in an adjective clustering task an influential approach for representing the meaning of a word in nlp is to treat it as a vector that codes the pattern of cooccurrence of that word with other expressions in a large corpus of language this approach to semantics naturally captures word clustering scales well to large lexicons and does not require words to be manually disambiguated however until recently it has been limited to the level of content words and it has not tackled in a general way compositionality that crucial property of natural language which allows speakers to derive the meaning of a complex linguistic constituent from the meaning of its immediate syntactic subconstituentsformal semantics the research program stemming from montague has opposite strengths and weaknessesits core semantic notion is the sentence not the word at the lexical level it focuses on the meaning of function words one of its main goals is to formulate recursive compositional rules that derive the quantificational properties of complex sentences and their antecedentpronoun dependenciesgiven its focus on quantification fs treats the meanings of nouns and verbs as pure extensions nouns and verbs are properties and thus denote sets of individualsadjectives are also often assumed to denote properties in this view redadj would be the set of entities which are red plasticadj the set of objects made of plastic and so forthin the simplest case the meaning of an attributive adjectivenoun constituent can be obtained as the intersection of the adjective and noun extensions ann red car red objects n cars however the intersective method of combination is wellknown to fail in many cases for instance a fake gun is not a guneven for red the manner in which the color combines with a noun will be different in red ferrari red watermelon red traffic light these problems have prompted a more flexible fs representation for attributive adjectives functions from the meaning of a noun onto the meaning of a modified noun this mapping could now be sensitive to the particular noun the adjective receives and it does not need to return a subset of the original noun denotation however fs has nothing to say on how these functions should be constructedin the last few years there have been attempts to build compositional models that use distributional semantic representations as inputs most of them focusing on the combination of a verb and its argumentsthis paper addresses instead the combination of nouns and attributive adjectivesthis case was chosen as an interesting testbed because it has the property of recursivity and because very frequent adjectives such as different are at the border between content and function wordsfollowing the insight of fs 
we treat attributive adjectives as functions over noun meanings however noun meanings are vectors not sets and the functions are learnt from corpusbased nounan vector pairsoriginal contribution we propose and evaluate a new method to derive distributional representations for ans where an adjective is a linear function from a vector to another vector the linear map for a specific adjective is learnt using linear regression from pairs of noun and an vectors extracted from a corpusoutline distributional approaches to compositionality are shortly reviewed in section 2in section 3 we introduce our proposalthe experimental setting is described in section 4section 5 provides some empirical justification for using corpusharvested an vectors as the target of our function learning and evaluation benchmarkin section 6 we show that our model outperforms other approaches at the task of approximating such vectors for unseen ansin section 7 we discuss how adjectival meaning can be represented in our model and evaluate this representation in an adjective clustering tasksection 8 concludes by sketching directions for further workthe literature on compositionality in vectorbased semantics encompasses various related topics some of them not of direct interest here such as how to encode word order information in context vectors or sophisticated composition methods based on tensor products quantum logic etc that have not yet been empirically tested on largescale corpusbased semantic space tasks closer to our current purposes is the general framework for vector composition proposed by mitchell and lapata subsuming various earlier proposalsgiven two vectors you and v they identify two general classes of composition models additive models where a and b are weight matrices and multiplicative models where c is a weight tensor projecting the uv tensor product onto the space of p mitchell and lapata derive two simplified models from these general formstheir simplified additive model p αyou qv was a common approach to composition in the earlier literature typically with the scalar weights set to 1 or to normalizing constants mitchell and lapata also consider a constrained version of the multiplicative approach that reduces to componentwise multiplication where the ith component of the composed vector is given by pi uivithe simplified additive model produces a sort of union of features whereas componentwise multiplication has an intersective effectthey also evaluate a weighted combination of the simplified additive and multiplicative functionsthe best results on the task of paraphrasing nounverb combinations with ambiguous verbs are obtained using the multiplicative approach and by weighted combination of addition and multiplication the multiplicative approach also performs best in a later application to language modeling erk and pado adopt the same formalism but focus on the nature of input vectors suggesting that when a verb is composed with a noun the noun component is given by an average of verbs that the noun is typically object of also focused on composite input vectors within an additive frameworkagain the multiplicative model works best in erk and pados experimentsthe abovementioned researchers do not exploit corpus evidence about the p vectors that result from composition despite the fact that it is straightforward to extract direct distributional evidence about the composite items from the corpus the main innovation of guevara who focuses on adjectivenoun combinations is to use the cooccurrence vectors of 
observed ans to train a supervised composition model guevara adopts the full additive composition form from equation and he estimates the a and b weights using partial least squares regressionthe training data are pairs of adjectivenoun vector concatenations as input and corpusderived an vectors as outputguevara compares his model to the simplified additive and multiplicative models of mitchell and lapataobserved ans are nearer in the space of observed and predicted test set ans to the ans generated by his model than to those from the alternative approachesthe additive model on the other hand is best in terms of shared neighbor count between observed and predicted ansin our empirical tests we compare our approach to the simplified additive and multiplicative models of mitchell and lapata as well as to guevaras approachas discussed in the introduction we will take adjectives in attributive position to be functions from one noun meaning to anotherto start simple we assume here that adjectives in the attributive position are linear functions from ndimensional vectors onto ndimensional vectors an operation that can be expressed as multiplication of the input noun column vector by a n x n matrix that is our representation for the adjective in the framework of mitchell and lapata our approach derives from the additive form in equation with the matrix multiplying the adjective vector set to 0 pbv where p is the observed an vector b the weight matrix representing the adjective at hand and v a noun vectorin our approach the weight matrix b is specific to a single adjective as we will see in section 7 below it is our representation of the meaning of the adjectivelike guevara we estimate the values in the weight matrix by partial least squares regressionin our case the independent variables for the regression equations are the dimensions of the corpusbased vectors of the component nouns whereas the an vectors provide the dependent variablesunlike guevara we train separate models for each adjective and consequently corpusharvested adjective vectors play no role for us a few considerations are in orderfirst although we use a supervised learning method we do not need handannotated data since the target an vectors are automatically collected from the corpus just like vectors for single words arethus there is no extra external knowledge cost with respect to unsupervised approachessecond our approach rests on the assumption that the corpusderived an vectors are interesting objects that should constitute the target of what a composition process tries to approximatewe provide preliminary empirical support for this assumption in section 5 belowthird we have some reasonable hope that our functions can capture to a certain extent the polysemous nature of adjectives we could learn for example a green matrix with large positive weights mapping from noun features that pertain to concrete objects to color dimensions of the output vector as well as large positive weights from features characterizing certain classes of abstract concepts to politicalsocial dimensions in the output somewhat optimistically we hope that chair will have near0 values on the relevant abstract dimensions like initiative on the concrete features and thus the weights will not interferewe do not evaluate this claim specifically but our quantitative evaluation in section 6 shows that our approach does best with high frequency highly ambiguous adjectivesfourth the approach is naturally syntaxsensitive since we train it on observed data for a 
specific syntactic position we would train separate linear models for say the same adjective in attributive and predicative positionas a matter of fact the current model is too syntaxsensitive and does not capture similarities across different constructionsfinally although adjective representations are not directly harvested from corpora we can still meaningfully compare adjectives to each other or other words by using their estimated matrix or an average vector for the ans that contain them both options are tested in section 7 belowwe built a large corpus by concatenating the webderived ukwac corpus a mid2009 dump of the english wikipedia and the british national corpus this concatenated corpus tokenized postagged and lemmatized with the treetagger contains about 283 billion tokens the ukwac and wikipedia sections can be freely downloaded with full annotation from the ukwac sitewe performed some of the list extraction and checking operations we are about to describe on a more manageable dataset obtained by selecting the first 100m tokens of ukwac we refer to this subset as the sample corpus belowwe could in principle limit ourselves to collecting vectors for the ans to be analyzed and their componentshowever to make the analysis more challenging and interesting we populate the semantic space where we will look at the behaviour of the ans with a large number of adjectives and nouns as well as further ans not in the test setwe refer to the overall list of items we build semantic vectors for as the extended vocabularywe use a subset of the extended vocabulary containing only nouns and adjectives for feature selection and dimensionality reduction so that we do not implicitly bias the structure of the semantic space by our choice of ansto construct the an test set we first selected 36 adjectives across various classes size denominal colors positive evaluation temporal modal plus some common abstract antonymous pairs we were careful to include intersective cases such as electronic as well as nonintersective adjectives that are almost function words we extracted all nouns that occurred at least 300 times in postadjectival position in the sample corpus excluding some extremely frequent temporal and measure expressions such as time and range for a total of 1420 distinct nounsby crossing the selected adjectives and nouns we constructed a test set containing 26440 ans all attested in the sample corpus the core vocabulary contains the top 8k most frequent noun lemmas and top 4k adjective lemmas from the concatenated corpus the extended vocabulary contains this core plus the 26440 test ans the 16 adjectives and 43 nouns that are components of these ans and that are not in the core set and 2500 more ans randomly sampled from those that are attested in the sample corpus have a noun from the same list used for the test set ans and an adjective that occurred at least 5k times in the sample corpusin total the extended vocabulary contains 40999 entries 8043 nouns 4016 adjectives and 28940 ansfull cooccurrence matrix the 10k lemmas that cooccur with the largest number of items in the core vocabulary constitute the dimensions of our cooccurrence matrixusing the concatenated corpus we extract sentenceinternal cooccurrence counts of all the items in the extended vocabulary with the 10k dimension wordswe then transform the raw counts into local mutual information scores dimensionality reduction since for each test set adjective we need to estimate a regression model for each dimension we want a compact space 
with relatively few dense dimensionsa natural way to do this is to apply the singular value decomposition to the cooccurrence matrix and represent the items of interest with their coordinates in the space spanned by the first n right singular vectorsapplying svd is independently justified because besides mitigating the dimensionality problem it often improves the quality of the semantic space to avoid bias in favour of dimensions that capture variance in the test set ans we applied svd to the core vocabulary subset of the cooccurrence matrix the core 12k 10k matrix was reduced using svd to a 12k300 matrixthe other row vectors of the full cooccurrence matrix were projected onto the same reduced space by multiplying them by a matrix containing the first n right singular vectors as columnsmerging the items used to compute the svd and those projected onto the resulting space we obtain a 40999300 matrix representing 8043 nouns 4016 adjectives and 28940 ansthis reduced matrix constitutes a realistically sized semantic space that also contains many items that are not part of our test set but will be potential neighbors of the observed and predicted test ans in the experiments to followthe quality of the svd reduction itself was independently validated on a standard similarity judgment dataset obtaining similar pearson correlations of vector cosines and human judgments in both the original and reduced spacesthere are several parameters involved in constructing a semantic space since our current focus is on alternative composition methods evaluated on a shared semantic space exploring parameters pertaining to the construction of the semantic space is not one of our priorities although we cannot of course exclude that the nature of the underlying semantic space affects different composition methods differentlyin the proposed adjectivespecific linear map method an an is generated by multiplying an adjective weight matrix with a noun vectorthe j weights in the ith row of the matrix are the coefficients of a linear regression predicting the values of the ith dimension of the an vector as a linear combination of the j dimensions of the component nounthe linear regression coefficients are estimated separately for each of the 36 tested adjectives from the corpusobserved nounan pairs containing that adjective since we are working in the 300dimensional right singular vector space for each adjective we have 300 regression problems with 300 independent variables and the training data range from about 200 to more than 1k itemswe estimate the coefficients using partial least squares regression as implemented in the r pls package with respect to standard least squares estimation this technique is more robust against overtraining by effectively using a smaller number of orthogonal latent variables as predictors and it exploits the multivariate nature of the problem when determining the latent dimensionsthe number of latent variables to be used in the core regression are a free parameter of plsrfor efficiency reasons we did not optimize itwe picked instead 50 latent variables by the ruleofthumb reasoning that for any adjective we can use at least 200 nounan pairs for training and the independentvariabletotrainingitem ratio will thus never be above 14we adopt a leaveoneout training regime so that each target an is generated by an adjective matrix that was estimated from all the other ans with the same adjective minus the targetwe use plsr with 50 latent variables also for our reimplementation of guevaras single 
linear map approach in which a single regression matrix is estimated for all ans across adjectivesthe training data in this case are given by the concatenation of the observed adjective and noun vectors coupled with the corresponding an vectors for each target an we randomly sample 2000 other adjectivenounan tuples for training and use the resulting coefficient matrix to generate the an vector from the concatenated target adjective and noun vectorsadditive an vectors are obtained by summing the corresponding adjective and noun vectors after normalizing them multiplicative vectors were obtained by componentwise multiplication of the adjective and noun vectors finally the adj and noun baselines use the adjective and noun vectors respectively as surrogates of the an vectorfor the add mult adj and noun methods we ran the tests of section 6 not only in the svdreduced space but also in the original 10kdimensional cooccurrence spaceonly the mult method achieved better performance in the original spacewe conjecture that this is because the svd dimensions can have negative values leading to counterintuitive results with componentwise multiplication we tried to alleviate this problem by assigning a 0 to composite dimensions where the two input vectors had different signsthe resulting performance was better but still below that of mult in original spacethus in section 6 we report mult results from the full cooccurrence matrix reduced space results for all other methods5 study 1 ans in semantic space the actual distribution of ans in the corpus as recorded by their cooccurrence vectors is fundamental to what we are doingour method relies on the hypothesis that the semantics of an composition does not depend on the independent distribution of adjectives themselves but on how adjectives transform the distribution of nouns as evidenced by observed pairs of nounan vectorsmoreover coherently with this view our evaluation below will be based on how closely the models approximate the observed vectors of unseen ansthat our goal in modeling composition should be to approximate the vectors of observed ans is in a sense almost trivialwhether we synthesize an an for generation or decoding purposes we would want the synthetic an to look as much as possible like a real an in its natural usage contexts and cooccurrence vectors of observed ans are a summary of their usage in actual linguistic contextshowever it might be the case that the specific resources we used for our vector construction procedure are not appropriate so that the specific observed an vectors we extract are not reliable we provide here some preliminary qualitative evidence that this is in general not the case by tapping into our own intuitions on where ans should be located in semantic space and thus on how sensible their neighbors arefirst we computed centroids from normalized svd space vectors of all the ans that share the same adjective we looked at the nearest neighbors of these centroids in semantic space among the 41k items in our extended vocabulary as illustrated for a random sample of 9 centroids in table 1 centroids are positioned in intuitively reasonable areas of the space typically near the adjective itself or the corresponding noun prototypical ans for that adjective elements related to the definition of the adjective and so onamerican n black n easy n am representative black face easy start am territory black hand quick am source black little cost green n historical n mental n green historical mental activity red road hist event 
mental experience green colour hist content mental energy necessary n nice n young n necessary nice youthful necessary degree good bit young doctor sufficient nice break young staff how about the neighbors of specific anstable 2 reports the nearest 3 neighbors of 9 randomly selected ans involving different adjectives bad electronic historical luck communication map bad elec storage topographical bad weekend elec transmission atlas good spirit purpose hist material important route nice girl little war important transport good girl great war important road big girl major war major road guy small war red cover special collection young husband black cover general collection small son hardback small collection small daughter red label archives mistress the nearest neighbors of the corpusbased an vectors in table 2 make in general intuitive senseimportantly the neighbors pick up the composite meaning rather than that of the adjective or noun alonefor example cover is an ambiguous word but the hardback neighbor relates to its front of a book meaning that is the most natural one in combination with redsimilarly it makes more sense that a young husband would have small sons and daughters we realize that the evidence presented here is of a very preliminary and intuitive natureindeed we will argue in the next section that there are cases in which the corpusderived an vector might not be a good approximation to our semantic intuitions about the an and a modelcomposed an vector is a better semantic surrogateone of the most important avenues for further work will be to come to a better characterization of the behaviour of corpusobserved ans where they work and where the do notstill the neighbors of average and anspecific vectors of tables 1 and 2 suggest that for the bulk of ans such corpusbased cooccurrence vectors are semantically reasonablehaving tentatively established that the sort of vectors we can harvest for ans by directly collecting their corpus cooccurrences are reasonable representations of their composite meaning we move on to the core question of whether it is possible to reconstruct the vector for an unobserved an from information about its componentswe use nearness to the corpusobserved vectors of heldout ans as a very direct way to evaluate the quality of modelgenerated ans since we just saw that the observed ans look reasonable we leave it to further work to assess the quality of the generated ans in an applied setting for example adapting mitchell and lapatas paraphrasing task to anssince the observed vectors look like plausible representations of composite meaning we expect that the closer the modelgenerated vectors are to the observed ones the better they should also perform in any task that requires access to the composite meaning and thus that the results of the current evaluation should correlate with applied performancemore in detail we evaluate here the composition methods by computing for each of them the cosine of the test set an vectors they generate with the 41k vectors representing our extended vocabulary in semantic space and looking at the position of the corresponding observed ans in the cosineranked liststhe lower the rank the better the approximationfor efficiency reasons we flatten out the ranks after the top 1000 neighborsthe results are summarized in table 3 by the median and the other quartiles calculated across all 26440 ans in the test setthese measures are not affected by the cutoff after 1k neighborsto put the reported results into perspective a model with a 
first quartile rank of 999 does very significantly better than chance our proposed method alm emerges as the best approachthe difference with the second best model add is highly statistically significant neighbors are also sensible moving to the right we see 10 random examples of ans where the observed an was at least 999 neighbors apart from the alm predictionfirst we notice some ans that are difficult to interpret outofcontext second at least subjectively we find that in many cases the nearest neighbor of predicted an is actually more sensible than that of observed an current element for current dimension historical reality for historical thing special thing for special something young image for young photoin the other cases the predicted an neighbor is at least not obviously worse than the observed an neighborthere is a high inverse correlation between the frequency of occurrence of an an and the rank of the observed an with respect to the predicted one suggesting that our model is worse at approximating the observed vectors of rare forms that might in turn be those for which the corpusbased representation is less reliablein these cases dissimilarities between observed and expected vectors rather than signaling problems with the model might indicate that the predicted vector based on a composition function learned from many examples is better than the one directly extracted from the corpusthe examples in the right panel of table 4 bring some preliminary support to this hypothesis to be systematically explored in future workif adjectives are functions and not corpusderived vectors is it still possible to compare them meaningfullywe explore two ways to accomplish this in our framework one is to represent adjectives by the average of the an vectors that contain them and the other to compare them based on the 300300 weight matrices we estimate from nounan pairs we compare the quality of these representations to that of the standard approach in distributional semantics ie representing the adjectives directly with their corpus cooccurrence profile vectors we evaluate performance on the task of clustering those 19 adjectives in our set that can be relatively straightforwardly categorized into general classes comprising a minimum of 4 itemsthe test set built according to these criteria contains 4 classes color positive evaluation time and size we cluster with the cluto toolkit using the repeated bisections with global optimization method accepting all of clutos default values for this choicecluster quality is evaluated by percentage purity if nir is the number of items from the ith true class assigned to the rth cluster n is the total number of items and k the number of clusters then purity n r1rmaxwe calculate i empirical 95 confidence intervals around purity by a heuristic bootstrap procedure based on 10k resamplings of the data set the random baseline distribution is obtained by 10k random assignments of adjectives to the clusters under the constraint that no cluster is emptytable 5 shows that all methods are significantly better than chanceour two indirect representations achieve similar performance and they are better than the traditional method based on adjective cooccurrence vectorswe conclude that although our approach does not provide a direct encoding of adjective meaning in terms of such independently collected vectors it does have meaningful ways to represent their semantic propertiesthe work we reported constitutes an encouraging start for our approach to modeling compositionwe 
suggested along the way various directions for further studieswe consider the following issues to be the most pressing oneswe currently train each adjectivespecific model separately we should explore hierarchical modeling approaches that exploit similarities across adjectives to estimate better modelsevaluationwise the differences between observed and predicted ans must be analyzed more extensively to support the claim that when their vectors differ modelbased prediction improves on the observed vectorevaluation in a more applied task should also be pursued in particular we will design a paraphrasing task similar to the one proposed by mitchell and lapata to evaluate nounverb constructionssince we do not collect vectors for the functor component of a composition process our approach naturally extends to processes that involve bound morphemes such as affixation where we would not need to collect independent cooccurrence information for the affixesfor example to account for re prefixation we do not need to collect a re vector but simply vectors for a set of vrev pairs where both members of the pairs are words our approach can also deal outofthebox with recursive constructions and can be easily extended to more abstract constructions such as determiner n still we need to design a good testing scenario to evaluate the quality of such modelgenerated constructionsultimately we want to compose larger and larger constituents up to full sentencesit remains to be seen if the approach we proposed will scale up to such challengeswe thank gemma boleda emilano guevara alessandro lenci louise mcnally and the anonymous reviewers for useful information advice and comments
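The adjective-specific linear map at the core of the model above reduces to one multivariate regression per adjective: the rows of the learned matrix are coefficients predicting each AN dimension from the noun dimensions. The sketch below uses scikit-learn's PLS regression with 50 components as a stand-in for the partial least squares setup described in the text; corpus extraction and the leave-one-out training regime are omitted, the synthetic data at the end is only there to make the example runnable, and the function names are ours.

```python
# Minimal sketch of training and applying one adjective-specific linear map (ALM),
# assuming 300-dimensional SVD vectors as described above.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def train_adjective_map(noun_vecs: np.ndarray, an_vecs: np.ndarray,
                        n_components: int = 50) -> PLSRegression:
    """noun_vecs and an_vecs are aligned (n_pairs, n_dims) arrays holding the
    corpus vectors of the nouns and of the observed adjective-noun phrases
    for a single adjective."""
    model = PLSRegression(n_components=n_components)
    model.fit(noun_vecs, an_vecs)   # effectively one regression per AN dimension
    return model

def compose(adjective_map: PLSRegression, noun_vec: np.ndarray) -> np.ndarray:
    """The adjective acts as a learned function on the noun vector, so an
    unseen AN is generated by applying the fitted map to the noun."""
    return adjective_map.predict(noun_vec.reshape(1, -1))[0]

# Synthetic example: 250 noun/AN training pairs in a 300-dimensional space.
rng = np.random.default_rng(0)
nouns = rng.standard_normal((250, 300))
ans = nouns @ rng.standard_normal((300, 300)) * 0.1   # fake observed AN vectors
adjective_model = train_adjective_map(nouns, ans)
predicted_an = compose(adjective_model, rng.standard_normal(300))
```

In this setup the predicted AN vector can then be compared by cosine to the observed AN vectors in the semantic space, as in the rank-based evaluation described above.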
D10-1115
nouns are vectors adjectives are matrices representing adjectivenoun constructions in semantic space. we propose an approach to adjectivenoun composition for corpusbased distributional semantics that building on insights from theoretical linguistics represents nouns as vectors and adjectives as datainduced functions over nominal vectors. our model significantly outperforms the rivals on the task of reconstructing an vectors not seen in training. a small posthoc analysis further suggests that when the modelgenerated an vector is not similar to the corpusobserved an vector this is due to anomalies in the latter. we show moreover that our approach provides two novel ways to represent adjective meanings alternative to its representation via corpusbased cooccurrence vectors both outperforming the latter in an adjective clustering task. we find that the mult method can be expected to perform better in the original non reduced semantic space because the svd dimensions can have negative values leading to counterintuitive results with componentwise multiplication. the adjectivespecific linear map model performed far better than add and mult in approximating the correct vectors for unseen ans. on this task add and mult work better while alm is successful only in the more sophisticated measure of neighbor density
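For comparison with the summary above, here are minimal versions of the two simplified composition baselines it mentions and of the cluster purity score used in the adjective clustering evaluation. They follow the definitions given in the text (the additive model sums normalized input vectors; purity averages, over clusters, the size of the dominant gold class) rather than the authors' exact implementation.

```python
# Baseline composition functions and the purity metric, as described in the text.
import numpy as np
from collections import Counter

def add_compose(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Simplified additive model: sum of the normalized component vectors."""
    return u / np.linalg.norm(u) + v / np.linalg.norm(v)

def mult_compose(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    """Simplified multiplicative model: component-wise product p_i = u_i * v_i."""
    return u * v

def purity(predicted_clusters, gold_labels) -> float:
    """predicted_clusters: iterable of lists of item ids;
    gold_labels: dict mapping item id -> gold class label."""
    n = sum(len(c) for c in predicted_clusters)
    correct = sum(max(Counter(gold_labels[i] for i in c).values())
                  for c in predicted_clusters if c)
    return correct / n

# e.g. purity([[0, 1], [2, 3, 4]],
#             {0: "color", 1: "color", 2: "size", 3: "size", 4: "time"})
# -> (2 + 2) / 5 = 0.8
```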
inducing probabilistic ccg grammars from logical form with higherorder unification this paper addresses the problem of learning to map sentences to logical form given training data consisting of natural language sentences paired with logical representations of their meaning previous approaches have been designed for particular natural languages or specific meaning representations here we present a more general method the approach induces a probabilistic ccg grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences we use higherorder unification to define a hypothesis space containing all grammars consistent with the training data and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a loglinear parsing model experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations a key aim in natural language processing is to learn a mapping from natural language sentences to formal representations of their meaningrecent work has addressed this problem by learning semantic parsers given sentences paired with logical meaning representations for example the training data might consist of english sentences paired with lambdacalculus meaning representations given pairs like this the goal is to learn to map new unseen sentences to their corresponding meaningprevious approaches to this problem have been tailored to specific natural languages specific meaning representations or bothhere we develop an approach that can learn to map any natural language to a wide variety of logical representations of linguistic meaningin addition to data like the above this approach can also learn from examples such as sentence hangi eyaletin texas ye siniri vardir meaning answer where the sentence is in turkish and the meaning representation is a variablefree logical expression of the type that has been used in recent work the reason for generalizing to multiple languages is obviousthe need to learn over multiple representations arises from the fact that there is no standard representation for logical form for natural languageinstead existing representations are ad hoc tailored to the application of interestfor example the variablefree representation above was designed for building natural language interfaces to databasesour approach works by inducing a combinatory categorial grammar a ccg grammar consists of a languagespecific lexicon whose entries pair individual words and phrases with both syntactic and semantic information and a universal set of combinatory rules that project that lexicon onto the sentences and meanings of the language via syntactic derivationsthe learning process starts by postulating for each sentence in the training data a single multiword lexical item pairing that sentence with its complete logical formthese entries are iteratively refined with a restricted higherorder unification procedure that defines all possible ways to subdivide them consistent with the requirement that each training sentence can still be parsed to yield its labeled meaningfor the data sets we consider the space of possible grammars is too large to explicitly enumeratethe induced grammar is also typically highly ambiguous producing a large number of possible analyses for each sentenceour approach discriminates between analyses using a loglinear ccg parsing model similar to those used in previous work but differing in that 
the syntactic parses are treated as a hidden variable during training following the approach of zettlemoyer collins we present an algorithm that incrementally learns the parameters of this model while simultaneously exploring the space of possible grammarsthe model is used to guide the process of grammar refinement during training as well as providing a metric for selecting the best analysis for each new sentencewe evaluate the approach on benchmark datasets from a natural language interface to a database of us geography we show that accurate models can be learned for multiple languages with both the variablefree and lambdacalculus meaning representations introduced abovewe also compare performance to previous methods which are designed with either language or representation specific constraints that limit generalization as discussed in more detail in section 6despite being the only approach that is general enough to run on all of the data sets our algorithm achieves similar performance to the others even outperforming them in several casesthe goal of our algorithm is to find a function f x z that maps sentences x to logical expressions zwe learn this function by inducing a probabilistic ccg grammar from a training set i 1 n containing example pairs such as the induced grammar consists of two components which the algorithm must learn tion over the possible parses y conditioned on the sentence xwe will present the approach in two partsthe lexical induction process uses a restricted form of higher order unification along with the ccg combinatory rules to propose new entries for athe complete learning algorithm integrates this lexical induction with a parameter estimation scheme that learns 0before presenting the details we first review necessary backgroundthis section provides an introduction to the ways in which we will use lambda calculus and higherorder unification to construct meaning representationsit also reviews the ccg grammar formalism and probabilistic extensions to it including existing parsing and parameter estimation techniqueswe assume that sentence meanings are represented as logical expressions which we will construct from the meaning of individual words by using the operations defined in the lambda calculuswe use a version of the typed lambda calculus in which the basic types include e for entities t for truth values and i for numbersthere are also function types of the form that are assigned to lambda expressions such as axstate which take entities and return truth valueswe represent the meaning of words and phrases using lambdacalculus expressions that can contain constants quantifiers logical connectors and lambda abstractionsthe advantage of using the lambda calculus lies in its generalitythe meanings of individual words and phrases can be arbitrary lambda expressions while the final meaning for a sentence can take different formsit can be a full lambdacalculus expression a variablefree expression such as answer or any other logical expression that can be built from the primitive meanings via function application and compositionthe higherorder unification problem involves finding a substitution for the free variables in a pair of lambdacalculus expressions that when applied makes the expressions equal each otherthis problem is notoriously complex in the unrestricted form it is undecidablein this paper we will guide the grammar induction process using a restricted version of higherorder unification that is tractablefor a given expression h we will need to find expressions 
for f and g such that either h f or h axfthis limited form of the unification problem will allow us to define the ways to split h into subparts that can be recombined with ccg parsing operations which we will define in the next section to reconstruct h ccg is a linguistic formalism that tightly couples syntax and semantics and can be used to model a wide range of language phenomenafor present purposes a ccg grammar includes a lexicon a with entries like the following where each lexical item w x h has words w a syntactic category x and a logical form h expressed as a lambdacalculus expressionfor the first example these are new york np and nyccg syntactic categories may be atomic or complex ccg combines categories using a set of combinatory rulesfor example the forward and these rules apply to build syntactic and semantic derivations under the control of the word order information encoded in the slash directions of the lexical entriesfor example given the lexicon above the sentence new york borders vermont can be parsed to produce where each step in the parse is labeled with the combinatory rule and backward could be used to combine the category s with any of snp snp or snpfigure 1 shows two parses where the composition combinators and vertical slashes are usedthese parses closely resemble the types of analyses that will be possible under the grammars we learn in the experiments described in section 8given a ccg lexicon a there will in general be many possible parses for each sentencewe select the most likely alternative using a loglinear model which consists of a feature vector 0 and a parameter vector 0the joint probability of a logical form z constructed with a parse y given a sentence x is section 7 defines the features used in the experiments which include for example lexical features that indicate when specific lexical items in a are used in the parse yfor parsing and parameter estimation we use standard algorithms as described belowthe parsing or inference problem is to find the most likely logical form z given a sentence x assuming the parameters 0 and lexicon a are known where the probability of the logical form is found by summing over all parses that produce it in this approach the distribution over parse trees y is modeled as a hidden variablethe sum over parses in eq3 can be calculated efficiently using the insideoutside algorithm with a ckystyle parsing algorithmto estimate the parameters themselves we use stochastic gradient updates given a set of n sentencemeaning pairs i 1n we update the parameters 0 iteratively for each example i by following the local gradient of the conditional loglikelihood objective oi log pthe local gradient of the individual parameter 0j associated with feature oj and training instance is given by as with eq3 all of the expectations in eq4 are calculated through the use of the insideoutside algorithm on a pruned parse chartin the experiments each chart cell was pruned to the top 200 entriesbefore presenting a complete learning algorithm we first describe how to use higherorder unification to define a procedure for splitting ccg lexical entriesthis splitting process is used to expand the lexicon during learningwe seed the lexical induction with a multiword lexical item xiszi for each training example consisting of the entire sentence xi and its associated meaning representation zifor example one initial lexical item might be although these initial sentential lexical items can parse the training data they will not generalize well to unseen datato learn 
effectively we will need to split overly specific entries of this type into pairs of new smaller entries that generalize betterfor example one possible split of the lexical entry given in would be the pair new york borders snp axnext to vermont np vt where we broke the original logical expression into two new ones axnext to and vt and paired them with syntactic categories that allow the new lexical entries to be recombined to produce the original analysisthe next three subsections define the set of possible splits for any given lexical itemthe process is driven by solving a higherorder unification problem that defines all of the ways of splitting the logical expression into two parts as described in section 41section 42 describes how to construct syntactic categories that are consistent with the two new fragments of logical form and which will allow the new lexical items to recombinefinally section 43 defines the full set of lexical entry pairs that can be created by splitting a lexical entryas we will see this splitting process is overly prolific for any single language and will yield many lexical items that do not generalize wellfor example there is nothing in our original lexical entry above that provides evidence that the split should pair vermont with the constant vt and not axnext tosection 5 describes how we estimate the parameters of a probabilistic parsing model and how this parsing model can be used to guide the selection of items to add to the lexiconthe set of possible splits for a logical expression h is defined as the solution to a pair of higherorder unification problemswe find pairs of logical expressions such that either f h or axf h solving these problems creates new expressions f and g that can be recombined according to the ccg combinators as defined in section 32 to produce h in the unrestricted case there can be infinitely many solution pairs for a given expression h for example when h tex and f axtex the expression g can be anythingalthough it would be simple enough to forbid vacuous variables in f and g the number of solutions would still be exponential in the size of h for example when h contains a conjunction such as h axcity n major n in any subset of the expressions in the conjunction can be assigned to f to limit the number of possible splits we enforce the following restrictions on the possible higherorder solutions that will be used during learning together these three restrictions guarantee that the number of splits is in the worst case an ndegree polynomial of the number of constants in h the constraints were designed to increase the efficiency of the splitting algorithm without impacting performance on the development datawe define the set of possible splits for a category xh with syntax x and logical form h by enumerating the solution pairs to the higherorder unification problems defined above and creating syntactic categories for the resulting expressionsfor example given x h snp axin f ayaxin and g tex we would produce the following two pairs of new categories which were constructed by first choosing the syntactic category for g in this case np and then enumerating the possible directions for the new slash in the category containing f we consider each of these two steps in more detail belowthe new syntactic category for g is determined based on its type tfor example t e and t then the function qt takes an input type t and returns the syntactic category of t as follows the basic types e and t are assigned syntactic categories np and s and all functional 
types are assigned categories recursivelyfor example q snp and q snpnpthis definition of ccg categories is unconventional in that it never assigns atomic categories to functional typesfor example there is no distinct syntactic category n for nouns instead the more complex category snp is usednow we are ready to define the set of all category splitsfor a category a xh we can define which is a union of sets each of which includes splits for a single ccg operatorfor example fa is the set of category pairs where each pair can be combined with the forward application combinator described in section 32 to reconstruct xh the remaining three sets are defined similarly and are associated with the backward application and forward and backward composition operators respectively where the composition sets fc and because only accept input categories with the appropriate outermost slash direction for example fcwe can now define the lexical splits that will be used during learningfor lexical entry w0n a with word sequence w0n hw0 wni and ccg category a define the set sl of splits to be where we enumerate all ways of splitting the words sequence w0n and aligning the subsequences with categories in sc as defined in the last sectionthe previous section described how a splitting procedure can be used to break apart overly specific lexical items into smaller ones that may generalize better to unseen datathe space of possible lexical items supported by this splitting procedure is too large to explicitly enumerateinstead we learn the parameters of a pccg which is used both to guide the splitting process and also to select the best parse given a learned lexiconfigure 2 presents the unificationbased learning algorithm ublthis algorithm steps through the data incrementally and performs two steps for each training examplefirst new lexical items are induced for the training instance by splitting and merging nodes in the best correct parse given the current parametersnext the parameters of the pccg are updated by making a stochastic gradient update on the marginal likelihood given the updated lexiconinputs and initialization the algorithm takes as input the training set of n pairs i 1n along with an np list anp of proper noun lexical items such as texas nptexthe lexicon a is initialized with a single lexical item xi s zi for each of the training pairs along with the contents of the np listit is possible to run the algorithm without the initial np list we include it to allow direct comparisons with previous approaches which also included np listsfeatures and initial feature weights are described in section 7step 1 updating the lexicon in the lexical update step the algorithm first computes the best correct parse tree y for the current training example and then uses y as input to the procedure newlex which determines which new lexical items to add to a newlex begins by enumerating all pairs for i j where c is a category occurring at a node in y and wij are the words it spansfor example in the left parse in figure 1 there would be four pairs one with the category c npnpaxborder and the phrase wij ye siniri vardir and one for each nonleaf node in the treefor each pair newlex considers introducing a new lexical item wij c which allows for the possibility of a parse where the subtree rooted at c is replaced with this new entrynewlex also considers adding each pair of new lexical items that is obtained by splitting wijc as described in section 4 thereby considering many different ways of reanalyzing the nodethis process 
creates a set of possible new lexicons where each lexicon expands a in a different way by adding the items from either a single split or a single merge of a node in yfor each potential new lexicon a newlex computes the probability p of the original parse y under a and parameters b that are the same as b but have weights for the new lexical items as described in section 7it also finds the best new parse y arg maxy p1 finally newlex selects the a with the largest difference in log probability between y and y and returns the new entries in aif y is the best parse for every a newlex returns the empty set the lexicon will not changestep 2 parameter updates for each training example we update the parameters b using the stochastic gradient updates given by eq4discussion the alternation between refining the lexicon and updating the parameters drives the learning processthe initial model assigns a conditional likelihood of one to each training example although the splitting step often decreases the probability of the data the new entries it produces are less specific and should generalize bettersince we initially assign positive weights to the parameters for new lexical items the overall approach prefers splitting trees with many lexical items will initially be much more likelyhowever if the learned lexical items are used in too many incorrect parses the stochastic gradient updates will down weight them to the point where the lexical induction step can merge or resplit nodes in the trees that contain themthis allows the approach to correct the lexicon and hopefully improve future performanceprevious work has focused on a variety of different meaning representationsseveral approaches have been designed for the variablefree logical representations shown in examples throughout this paperfor example kate mooney present a method that extends an existing svm learning algorithm to recover logical representationsthe 1this computation can be performed efficiently by incrementally updating the parse chart used to find yinputs training set i 1 n where each example is a sentence xi paired with a logical form ziset of np lexical items anpnumber of iterations t learning rate parameter α0 and cooling rate parameter c definitions the function newlex takes a parse y and returns a set of new lexical items found by splitting and merging categories in y as described in section 5the distributions p and p are defined by the loglinear model as described in section 33initialization wasp system uses statistical machine translation techniques to learn synchronous context free grammars containing both words and logiclu et al developed a generative model that builds a single hybrid tree of words syntax and meaning representationthese algorithms are all language independent but representation specificother algorithms have been designed to recover lambdacalculus representationsfor example wong mooney developed a variant of wasp specifically designed for this alternate representationzettlemoyer collins developed ccg grammar induction techniques where lexical items are proposed according to a set of handengineered lexical templatesour approach eliminates this need for manual effortanother line of work has focused on recovering meaning representations that are not based on logicexamples include an early statistical method for learning to fill slotvalue representations and a more recent approach for recovering semantic parse trees exploring the extent to which these representations are compatible with the logicbased learning 
approach we developed is an important area for future workfinally there is work on using categorial grammars to solve other related learning problemsfor example buszkowski penn describe a unificationbased approach for grammar discovery from bracketed natural language sentences and villavicencio developed an approach for modeling child language acquisitionadditionally bos et al consider the challenging problem of constructing broadcoverage semantic representations with ccg but do not learn the lexiconfeatures we use two types of features in our modelfirst we include a set of lexical features for each lexical item l e a we include a feature old that fires when l is usedsecond we include semantic features that are functions of the output logical expression zeach time a predicate p in z takes an argument a with type t in position i it triggers two binary indicator features o for the predicateargument relation and oi for the predicate argumenttype relationinitialization the weights for the semantic features are initialized to zerothe weights for the lexical features are initialized according to coocurrance statistics estimated with the giza implementation of ibm model 1we compute translation scores for pairs that cooccur in examples in the training datathe initial weight for each old is set to ten times the average score over the pairs in l except for the weights of seed lexical entries in anp which are set to 10 we used the learning rate α0 10 and cooling rate c 105 in all training scenarios and ran the algorithm for t 20 iterationsthese values were selected with cross validation on the geo880 development set described belowdata and evaluation we evaluate our system on the geoquery datasets which contain naturallanguage queries of a geographical database paired with logical representations of each querys meaningthe full geo880 dataset contains 880 pairs which we split into a development set of 600 pairs and a test set of 280 pairs following zettlemoyer collins the geo250 dataset is a subset of geo880 containing 250 sentences that have been translated into turkish spanish and japanese as well as the original englishdue to the small size of this dataset we use 10fold cross validation for evaluationwe use the same folds as wong mooney and lu et al allowing a direct comparisonthe geoquery data is annotated with both lambdacalculus and variablefree meaning representations which we have seen examples of throughout the paperwe report results for both representations using the standard measures of recall precision and f1 twopass parsing to investigate the tradeoff between precision and recall we report results with a twopass parsing strategywhen the parser fails to return an analysis for a test sentence due to novel words or usage we reparse the sentence and allow the parser to skip words with a fixed costskipping words can potentially increase recall if the ignored word is an unknown function word that does not contribute semantic contenttables 1 2 and 3 present the results for all of the experimentsin aggregate they demonstrate that our algorithm ubl learns accurate models across languages and for both meaning representationsthis is a new result no previous system is as generalwe also see the expected tradeoff between precision and recall that comes from the twopass parsing approach which is labeled ubls with the ability to skip words ubls achieves the highest recall of all reported systems for all evaluation conditionshowever ubl achieves much higher precision and better overall f1 scores which are 
generally comparable to the best performing systemsthe comparison to the ccg induction techniques of zc05 and zc07 is particularly strikingthese approaches used languagespecific templates to propose new lexical items and also required as input a set of handengineered lexical entries to model phenomena such as quantification and determinershowever the use of higherorder unification allows ubl to achieve comparable performance while automatically inducing these types of entriesfor a more qualitative evaluation table 4 shows a selection of lexical items learned with high weights for the lambdacalculus meaning representationsnouns such as state or estado are consistently learned across languages with the category snp which stands in for the more conventional n the algorithm also learns languagespecific constructions such as the japanese case markers no and wa which are treated as modifiers that do not add semantic contentlanguagespecific word order is also encoded using the slash directions of the ccg categoriesfor example what and que take their arguments to the right in the whinitial english and spanishhowever the turkish whword nelerdir and the japanese question marker nan desu ka are sentence final and therefore take their arguments to the leftlearning regularities of this type allows ubl to generalize well to unseen datathere is less variation and complexity in the learned lexical items for the variablefree representationthe fact that the meaning representation is deeply nested influences the form of the induced grammarfor example recall that the sentence what states border texas would be paired with the meaning answerfor this representation lexical items such as can be used to construct the desired outputin practice ubl often learns entries with only a single slash like those above varying only in the direction as required for the languageeven the more complex items such as those for quantifiers are consistently simpler than those induced from the lambdacalculus meaning representationsfor example one of the most complex entries learned in the experiments for english is the smallest npnpafaxsmallest onethere are also differences in the aggregate statistics of the learned lexiconsfor example the average length of a learned lexical item for the meaning representations is for turkish for english for spanish and for japanesefor both meaning representations the model learns significantly more multiword lexical items for the somewhat analytic japanese than the agglutinative turkishthere are also variations in the average number of learned lexical items in the best parses during the final pass of training 192 for japanese 206 for spanish 188 for english and 295 for turkishas compared to the other languages the morpologically rich turkish requires significantly more lexical variation to explain the datafinally there are a number of cases where the ubl algorithm could be improved in future workin cases where there are multiple allowable word orders the ubl algorithm must learn individual entries for each possibilityfor example the following two categories are often learned with high weight for the japanese word chiisai and are treated as distinct entries in the lexiconsimilarly the approach presented here does not model morphology and must repeatedly learn the correct categories for the turkish words nehri nehir nehirler and nehirlerin all of which correspond to the logical form axriverthis paper has presented a method for inducing probabilistic ccgs from sentences paired with logical formsthe 
approach uses higherorder unification to define the space of possible grammars in a language and representationindependent manner paired with an algorithm that learns a probabilistic parsing modelwe evaluated the approach on four languages with two meaning representations each achieving high accuracy across all scenariosfor future work we are interested in exploring the generality of the approach while extending it to new understanding problemsone potential limitation is in the constraints we introduced to ensure the tractability of the higherorder unification procedurethese restrictions will not allow the approach to induce lexical items that would be used with among other things many of the typeraised combinators commonly employed in ccg grammarswe are also interested in developing similar grammar induction techniques for contextdependent understanding problems such as the one considered by zettlemoyer collins such an approach would complement ideas for using highorder unification to model a wider range of language phenomena such as vp ellipsis we thank the reviewers for useful feedbackthis work was supported by the eu under ist cognitive systems grant ip fp62004ist427657 pacoplus and erc advanced fellowship 249520 gramplus to steedmankwiatkowski was supported by an eprsc studentshipzettlemoyer was supported by a us nsf international research fellowship
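To make the parameter estimation described in the preceding text concrete, the sketch below performs the stochastic gradient step on the conditional log-likelihood, where the update for each weight is the difference between the expected feature count over parses that yield the gold logical form and the expected feature count over all parses. This is only a toy illustration: it assumes the candidate parses for a sentence can be enumerated explicitly (the paper computes both expectations with the inside-outside algorithm over a pruned CKY chart), and the feature names, toy parses, and the cooled learning-rate schedule alpha0 / (1 + c * t) are assumptions made for the example, not values or code from the paper.

```python
import math
from collections import defaultdict

def expectations(parses, theta, restrict_to=None):
    """Expected feature vector under p(y | x; theta), optionally restricted
    to parses whose logical form equals restrict_to."""
    scored = [(feats, z, math.exp(sum(theta[f] * v for f, v in feats.items())))
              for feats, z in parses
              if restrict_to is None or z == restrict_to]
    total = sum(s for _, _, s in scored)
    exp_feats = defaultdict(float)
    for feats, _, s in scored:
        for f, v in feats.items():
            exp_feats[f] += v * s / total
    return exp_feats

def sgd_step(theta, parses, gold_z, alpha):
    """theta_j += alpha * (E[phi_j | x, gold z] - E[phi_j | x])."""
    e_correct = expectations(parses, theta, restrict_to=gold_z)
    e_all = expectations(parses, theta)
    for f in set(e_correct) | set(e_all):
        theta[f] += alpha * (e_correct.get(f, 0.0) - e_all.get(f, 0.0))

# Toy usage: two candidate parses of one sentence; only the first yields the gold form.
theta = defaultdict(float)
parses = [({'lex:new_york:=:NP': 1.0, 'pred-arg:next_to:1': 1.0}, 'next_to(ny, vt)'),
          ({'lex:new_york:=:S/NP': 1.0}, 'loc(ny, vt)')]
alpha0, c = 1.0, 1e-5                        # placeholder rate constants, not the paper's code
for t in range(20):                          # T passes with a cooled learning rate (assumed form)
    sgd_step(theta, parses, 'next_to(ny, vt)', alpha0 / (1.0 + c * t))
print(dict(theta))
```

After a few passes, the weights of features that appear only in the incorrect parse are driven negative, which is the behaviour the lexical induction step relies on when it decides whether a proposed split improves the analysis.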
D10-1119
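The splitting procedure described in the text above can also be made concrete with a small sketch of its restricted application case, f(g) = h, where the argument g is a single constant extracted from h. The nested-tuple term representation, the helper names, and the restriction to a single constant argument are simplifying assumptions for illustration only; the paper's full procedure additionally handles the composition case lambda x. f(g(x)) = h, enforces further restrictions on the solutions, and pairs each half with a CCG syntactic category.

```python
def constants(term, bound=frozenset()):
    """Collect the constants of a term: symbols that are not bound variables."""
    if isinstance(term, str):
        return set() if term in bound else {term}
    if term[0] == 'lambda':                       # ('lambda', variable, body)
        return constants(term[2], bound | {term[1]})
    out = set()                                   # application: (function, arg, ...)
    for sub in term:
        out |= constants(sub, bound)
    return out

def substitute(term, target, replacement):
    """Naively replace every occurrence of the symbol `target` (no capture handling
    is needed here because the new variable is fresh)."""
    if isinstance(term, str):
        return replacement if term == target else term
    return tuple(substitute(sub, target, replacement) for sub in term)

def split_application(h):
    """Enumerate pairs (f, g) with f(g) == h, where g is a single constant of h."""
    return [(('lambda', 'y0', substitute(h, c, 'y0')), c) for c in sorted(constants(h))]

def apply_fn(f, g):
    """Beta-reduce the outermost application f(g)."""
    assert f[0] == 'lambda'
    return substitute(f[2], f[1], g)

# h corresponds to "borders Vermont": lambda x. next_to(x, vt)
h = ('lambda', 'x', ('next_to', 'x', 'vt'))
for f, g in split_application(h):
    assert apply_fn(f, g) == h                    # every split reconstructs h exactly
    print('f =', f, '   g =', g)
```

Running this on the example yields one split that abstracts the constant vt (pairing it with the noun phrase Vermont) and one that abstracts next_to, which mirrors the observation above that nothing in the logical form alone tells the learner which pairing is correct; that choice is left to the probabilistic model.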
Inducing probabilistic CCG grammars from logical form with higher-order unification. This paper addresses the problem of learning to map sentences to logical form, given training data consisting of natural language sentences paired with logical representations of their meaning. Previous approaches have been designed for particular natural languages or specific meaning representations; here we present a more general method. The approach induces a probabilistic CCG grammar that represents the meaning of individual words and defines how these meanings can be combined to analyze complete sentences. We use higher-order unification to define a hypothesis space containing all grammars consistent with the training data, and develop an online learning algorithm that efficiently searches this space while simultaneously estimating the parameters of a log-linear parsing model. Experiments demonstrate high accuracy on benchmark data sets in four languages with two different meaning representations. We present an approach for language-independent learning that replaces the hand-specified templates with a higher-order-unification-based lexical induction method. We initialise lexical weights in their learning algorithm using corpus-wide alignment statistics across words and meaning elements.
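The type-driven category assignment mentioned in the text above (the basic types e and t map to NP and S, and functional types are handled recursively, so that, for example, the type of a one-place predicate maps to S|NP) can be written as a short recursive function. This is a sketch under the assumption that semantic types are encoded as nested (argument, result) pairs; the slash is left undirected because, as described above, the concrete slash directions are enumerated separately when categories are split.

```python
def q(sem_type):
    """Map a semantic type to a CCG syntactic category, as sketched above.
    Basic types: 'e' -> NP, 't' -> S.  A functional type (arg, result) becomes
    q(result)|q(arg), with the slash direction chosen later."""
    if sem_type == 'e':
        return 'NP'
    if sem_type == 't':
        return 'S'
    arg, result = sem_type
    return '({}|{})'.format(q(result), q(arg))

assert q('e') == 'NP'
assert q(('e', 't')) == '(S|NP)'               # e.g. lambda x. state(x)
assert q(('e', ('e', 't'))) == '((S|NP)|NP)'   # e.g. lambda y. lambda x. next_to(x, y)
```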
using universal linguistic knowledge to guide grammar induction we present an approach to grammar induction that utilizes syntactic universals to improve dependency parsing across a range of languages our method uses a single set of manuallyspecified languageindependent rules that identify syntactic dependencies between pairs of syntactic categories that commonly occur across languages during inference of the probabilistic model we use posterior expectation constraints to require that a minimum proportion of the dependencies we infer be instances of these rules we also automatically refine the syntactic categories given in our coarsely tagged input across six languages our approach outperforms stateoftheart unsupervised methods by a significant mar despite surface differences human languages exhibit striking similarities in many fundamental aspects of syntactic structurethese structural correspondences referred to as syntactic universals have been extensively studied in linguistics and underlie many approaches in multilingual parsingin fact much recent work has demonstrated that learning crosslingual correspondences from corpus data greatly reduces the ambiguity inherent in syntactic analysis in this paper we present an alternative grammar induction approach that exploits these structural correspondences by declaratively encoding a small set of universal dependency rulesas input to the model we assume a corpus annotated with coarse syntactic categories and a set of universal rules defined over these categories such as those in table 1these rules incorporate the definitional properties of syntactic categories in terms of their interdependencies and thus are universal across languagesthey can potentially help disambiguate structural ambiguities that are difficult to learn from data alone for example our rules prefer analyses in which verbs are dependents of auxiliaries even though analyzing auxiliaries as dependents of verbs is also consistent with the dataleveraging these universal rules has the potential to improve parsing performance for a large number of human languages this is particularly relevant to the processing of lowresource languagesfurthermore these universal rules are compact and wellunderstood making them easy to manually constructin addition to these universal dependencies each specific language typically possesses its own idiosyncratic set of dependencieswe address this challenge by requiring the universal constraints to only hold in expectation rather than absolutely ie we permit a certain number of violations of the constraintswe formulate a generative bayesian model that explains the observed data while accounting for declarative linguistic rules during inferencethese rules are used as expectation constraints on the posterior distribution over dependency structuresthis approach is based on the posterior regularization technique which we apply to a variational inference algorithm for our parsing modelour model can also optionally refine common highlevel syntactic categories into perlanguage categories by inducing a clustering of words using dirichlet processes since the universals guide induction toward linguistically plausible structures automatic refinement becomes feasible even in the absence of manually annotated syntactic treeswe test the effectiveness of our grammar induction model on six indoeuropean languages from three language groups english danish portuguese slovene spanish and swedishthough these languages share a highlevel indoeuropean ancestry they cover a 
diverse range of syntactic phenomenonour results demonstrate that universal rules greatly improve the accuracy of dependency parsing across all of these languages outperforming current stateoftheart unsupervised grammar induction methods learning with linguistic constraints our work is situated within a broader class of unsupervised approaches that employ declarative knowledge to improve learning of linguistic structure the way we apply constraints is closest to the latter two approaches of posterior regularization and generalized expectation criteriain the posterior regularization framework constraints are expressed in the form of expectations on posteriors this design enables the model to reflect constraints that are difficult to encode via the model structure or as priors on its parametersin their approach parameters are estimated using a modified them algorithm where the estep minimizes the kldivergence between the model posterior and the set of distributions that satisfies the constraintsour approach also expresses constraints as expectations on the posterior we utilize the machinery of their framework within a variational inference algorithm with a mean field approximationgeneralized expectation criteria another technique for declaratively specifying expectation constraints has previously been successfully applied to the task of dependency parsing this objective expresses constraints in the form of preferences over model expectationsthe objective is penalized by the square distance between model expectations and the prespecified values of the expectationthis approach yields significant gains compared to a fully unsupervised counterpartthe constraints they studied are corpus and languagespecificour work demonstrates that a small set of languageindependent universals can also serve as effective constraintsfurthermore we find that our method outperforms the generalized expectation approach using corpusspecific constraintslearning to refine syntactic categories recent research has demonstrated the usefulness of automatically refining the granularity of syntactic categorieswhile most of the existing approaches are implemented in the supervised setting liang et al propose a nonparametric bayesian model that learns the granularity of pcfg categories in an unsupervised fashionfor each nonterminal grammar symbol the model posits a hierarchical dirichlet process over its refinements to automatically learn the granularity of syntactic categoriesas with their work we also use nonparametric priors for category refinement and employ variational methods for inferencehowever our goal is to apply category refinement to dependency parsing rather than to pcfgs requiring a substantially different model formulationwhile liang et al demonstrated empirical gains on a synthetic corpus our experiments focus on unsupervised category refinement on real language datauniversal rules in nlp despite the recent surge of interest in multilingual learning there is surprisingly little computational work on linguistic universalson the acquisition side daume iii and campbell proposed a computational technique for discovering universal implications in typological featuresmore closely related to our work is the position paper by bender which advocates the use of manuallyencoded crosslingual generalizations for the development of nlp systemsshe argues that a system employing such knowledge could be easily adapted to a particular language by specializing this high level knowledge based on the typological features of the 
languagewe also argue that crosslanguage universals are beneficial for automatic language processing however our focus is on learning languagespecific adaptations of these rules from datathe central hypothesis of this work is that unsupervised dependency grammar induction can be improved using universal linguistic knowledgetoward this end our approach is comprised of two components a probabilistic model that explains how sentences are generated from latent dependency structures and a technique for incorporating declarative rules into the inference processwe first describe the generative story in this section before turning to how constraints are applied during inference in section 4our model takes as input a set of sentences where each word is annotated with a coarse partofspeech tagtable 2 provides a detailed technical description of our models generative process and figure 1 presents a model diagramfor each observed coarse symbol s iifor each child symbol s adraw secondlevel infinite multinomial over subsymbols πs0szc dpfor each tree node i generated in context c by parent symbol s and parent subsymbol z and parsesin the above gem dp dir and mult refer respectively to the stick breaking distribution dirichlet process dirichlet distribution and multinomial distributiongenerating symbols and words we describe how a single node of the tree is generated before discussing how the entire tree structure is formedeach node of the dependency tree is comprised of three random variables an observed coarse symbol s a hidden refined subsymbol z and an observed word xin the following let the parent of the current node have symbol s and subsymbol z the root node is generated from separate rootspecific distributionssubsymbol refinement is an optional component of the full model and can be omitted by deterministically equating s and zas we explain at the end of this section without this aspect the generative story closely resembles the classic dependency model with valence of klein and manning first we draw symbol s from a finite multinomial distribution with parameters θs0z0cas the indices indicate we have one such set of multinomial parameters for every combination of parent symbol s and subsymbol z along with a context c here the context of the current node can take one of six values corresponding to every combination of direction and valence with respect to its parentthe prior for each θs0z0c is a symmetric dirichlet with hyperparameter θ0next we draw the refined syntactic category subsymbol z from an infinite multinomial with parameters πss0z0chere the selection of π is indexed by the current nodes coarse symbol s the symbol s and subsymbol z of the parent node and the context c of the current nodefor each unique coarse symbol s we tie together the distributions πss0z0c for all possible parent and context combinations using a hierarchical dirichlet process specifically for a single s each distribution πss0z0c over subsymbols is drawn from a dp with concentration parameter α and base distribution βs over subsymbolsthis base distribution βs is itself drawn from a gem prior with concentration parameter γby formulating the generation of z as an hdp we can share parameters for a single coarse symbols subsymbol distribution while allowing for individual variability based on node parent and contextnote that parameters are not shared across different coarse symbols preserving the distinctions expressed via the coarse tag annotationsfinally we generate the word x from a finite multinomial with parameters φsz 
where s and z are the symbol and subsymbol of the current nodethe φ distributions are drawn from a symmetric dirichlet priorgenerating the tree structure we now consider how the structure of the tree ariseswe follow an approach similar to the widelyreferenced dmv model which forms the basis of the current stateoftheart unsupervised grammar induction model after a node is drawn we generate children on each side until we produce a designated stop symbolwe encode more detailed valence information than klein and manning and condition child generation on parent valencespecifically after drawing a node we first decide whether to proceed to generate a child or to stop conditioned on the parent symbol and subsymbol and the current context if we decide to generate a child we follow the previously described process for constructing a nodewe can combine the stopping decision with the generation of the child symbol by including a distinguished stop symbol as a possible outcome in distribution θ nosplit model variant in the absence of subsymbol refinement our model simplifies in some respectsin particular the hdp generation of z is obviated and word x is drawn from a word distribution 0s indexed solely by coarse symbol s the resulting simplified model closely resembles dmv except that it 1 explicitly generate words x rather than only partofspeech tags s 2 encodes richer context and valence information and 3 imposes a dirichlet prior on the symbol distribution bwe now describe how to augment our generative model of dependency structure with constraints derived from linguistic knowledgeincorporating arbitrary linguistic rules directly in the generative story is challenging as it requires careful tuning of either the model structure or priors for each constraintinstead following the approach of graca et al we constrain the posterior to satisfy the rules in expectation during inferencethis effectively biases the inference toward linguistically plausible settingsin standard variational inference an intractable true posterior is approximated by a distribution from a tractable set this tractable set typically makes stronger independence assumptions between model parameters than the model itselfto incorporate the constraints we further restrict the set to only include distributions that satisfy the specified expectation constraints over hidden variablesin general for some given model let b denote the entire set of model parameters and z and x denote the hidden structure and observations respectivelywe are interested in estimating the posterior pvariational inference transforms this problem into an optimization problem where we try to find a distribution q from a restricted set q that minimizes the kldivergence between q and p kl k p thus f is a lower bound on likelihoodmaximizing this lower bound is equivalent to minimizing the kldivergence between p and qto make this maximization tractable we make a mean field assumption that q belongs to a set q of distributions that factorize as follows we further constrain q to be from the subset of q that satisfies the expectation constraint eqf b where f is a deterministically computable function of the hidden structuresin our model for example f counts the dependency edges that are an instance of one of the declaratively specified dependency rules while b is the proportion of the total dependencies that we expect should fulfill this constraint2 with the mean field factorization and the expectation constraints in place solving the maximization of f in separately for each 
factor yields the following updates where we can solve by setting q to q since q is held fixed while updating q the expectation function of the constraint remains constant during this updateas shown by graca et al the update in is a constrained optimization problem and can be solved by performing gradient search on its dual for a fixed value of a the optimal q q expby updating q and q as in and we are effectively maximizing the lower bound f 2constraints of the form e9f b are easily imposed by negating f and bwe now derive the specific variational updates for our dependency induction modelfirst we assume the following meanfield factorization of our variational distribution the only factor affected by the expectation constraints is qrecall from the previous section that the update for q is performed via gradient search on the dual of a constrained minimization problem of the form where s0 varies over the set of unique symbols in the observed tags z0 denotes subsymbols for each symbol c varies over context values comprising a pair of direction and valence values and s corresponds to child symbolswe restrict q and q to be dirichlet distributions and q to be multinomialas with prior work we assume a degenerate q δ0 for tractability reasons ie all mass is concentrated on some single βwe also assume that the top level stickbreaking distribution is truncated at t ie q assigns zero probability to integers greater than t because of the truncation of β we can approximate q with an asymmetric finite dimensional dirichletthe factors are updated one at a time holding all other factors fixedthe variational update for q is given by where term eqcss0z0c is the expected count wrt q of child symbol s and subsymbol z in context c when generated by parent symbol s0 and subsymbol z0similarly the updates for q and q are given by where cs0z0c is the count of child symbol s being generated by the parent symbol s0 and subsymbol z0 in context c and cs0z0x is the count of word x being generated by symbol s0 and subsymbol z0 where n is the total number of sentences len is the length of sentence n and index h refers to the head of the jth node of sentence n given this q0 a gradient search is performed using to find the optimal λ and thus the primal solution for updating qfinally we update the degenerate factor q with the projected gradient search algorithm used by liang et al universal dependency rules we compile a set of 13 universal dependency rules consistent with various linguistic accounts shown in table 1these rules are defined over coarse partofspeech tags noun verb adjective adverb pronoun article auxiliary preposition numeral and conjunctioneach rule specifies a partofspeech for the head and argument but does not provide ordering informationwe require that a minimum proportion of the posterior dependencies be instances of these rules in expectationin contrast to prior work on ruledriven dependency induction where each rule has a separately specified expectation we only set a single minimum expectation for the proportion of all dependencies that must match one of the rulesthis setup is more relevant for learning with universals since individual rule frequencies vary greatly between languagesenglishspecific dependency rules for english we also consider a small set of handcrafted dependency rules designed by michael collins3 for deterministic parsing shown in table 3unlike the universals from table 1 these rules alone are enough to construct a full dependency treethus they allow us to judge whether the model is 
able to improve upon a humanengineered deterministic parsermoreover with this dataset we can assess the additional benefit of using rules tailored to an individual language as opposed to universal rulesdatasets and evaluation we test the effectiveness of our grammar induction approach on english danish portuguese slovene spanish and swedishfor english we use the penn treebank transformed from cfg parses into dependencies with the collins head finding rules for the other languages we use data from the 2006 conllx shared task each dataset provides manually annotated partofspeech tags that are used for both training and testingfor comparison purposes with previous work we limit the crosslingual experiments to sentences of length 10 or less for english we also explore sentences of length up to 20the final output metric is directed dependency accuracythis is computed based on the viterbi parses produced using the final unnormalized variational distribution q over dependency structureshyperparameters and training regimes unless otherwise stated in experiments with rulebased constraints the expected proportion of dependencies that must satisfy those constraints is set to 08this threshold value was chosen based on minimal tuning on a single language and ruleset and carried over to each other experimental conditiona more detailed discussion of the thresholds empirical impact is presented in section 71variational approximations to the hdp are truncated at 10all hyperparameter values are fixed to 1 except α which is fixed to 10we also conduct a set of nosplit experiments to evaluate the importance of syntactic refinement in these experiments each coarse symbol corresponds to only one refined symbolthis is easily effected during inference by setting the hdp variational approximation truncation level to onefor each experiment we run 50 iterations of variational updates for each iteration we perform five steps of gradient search to compute the update for the variational distribution q over dependency structuresin the following section we present our primary crosslingual results using universal rules before performing a more indepth analysis of model properties such as sensitivity to ruleset selection and inference stability with universal dependency rules compared to dmv and pgi the dmv results are taken from bergkirkpatrick and klein bold numbers indicate the best result for each languagefor the full model the standard deviation in performance over five runs is indicated in parenthesestable 4 shows the performance of both our full model and its nosplit version using universal dependency rules across six languageswe also provide the performance of two baselines the dependency model with valence and the phylogenetic grammar induction model hdpdep outperforms both dmv and pgi across all six languagesagainst dmv we achieve an average absolute improvement of 241this improvement is expected given that dmv does not have access to the additional information provided through the universal rulespgi is more relevant as a point of comparison since it is able to leverage multilingual data to learn information similar to what we have declaratively specified using universal rulesspecifically pgi reduces induction ambiguity by connecting languagespecific parameters via phylogenetic priorswe find however that we outperform pgi by an average margin of 72 demonstrating the benefits of explicit rule specificationan additional point of comparison is the lexicalized unsupervised parser of headden iii et al which yields the 
current stateoftheart unsupervised accuracy on english at 688our method also outperforms this approach without employing lexicalization and sophisticated smoothing as they dothis result suggests that combining the complementary strengths of their approach and ours dency rules on english and spanishfor each rule we evaluate the model using the ruleset excluding that rule and list the most significant rules for each languagethe second last column is the absolute loss in performance compared to the setting where all rules are availablethe last column shows the percentage of the gold dependencies that satisfy the rule can yield further performance improvementstable 4 also shows the nosplit results where syntactic categories are not refinedwe find that such refinement usually proves to be beneficial yielding an average performance gain of 37however we note that the impact of incorporating splitting varies significantly across languagesfurther understanding of this connection is an area of future researchfinally we note that our model exhibits low variance for most languagesthis result attests to how the expectation constraints consistently guide inference toward highaccuracy areas of the search spaceablation analysis our next experiment seeks to understand the relative importance of the various universal rules from table 1we study how accuracy is affected when each of the rules is removed one at a time for english and spanishtable 5 lists the rules with the greatest impact on performance when removedwe note the high overlap between the most significant rules for english and spanishwe also observe that the relationship between a rules frequency and its importance for high accuracy is not straightforwardfor example the preposition noun rule whose removal degrades accuracy the most for both english and spanish is not the most frequent rule in either languagethis result suggests that some rules are harder to learn than others regardless of their frequency so their presence in the specified ruleset yields stronger performance gainsvarying the constraint threshold in our main experiments we require that at least 80 of the expected dependencies satisfy the rule constraintswe arrived at this threshold by tuning on the basis of english onlyas shown in figure 2 for english a broad band of threshold values from 75 to 90 yields results within 25 of each other with a slight peak at 80to further study the sensitivity of our method to how the threshold is set we perform post hoc experiments with other threshold values on each of the other languagesas figure 2 also shows on average a value of 80 is optimal across languages though again accuracy is stable within 25 between thresholds of 75 to 90these results demonstrate that a single threshold is broadly applicable across languagesinterestingly setting the threshold value independently for each language to its true proportion based on the gold dependencies does not achieve optimal table 6 directed accuracy of our model on sentences of length 10 or less and 20 or less from wsj with different rulesets and with no rules along with various baselines from the literatureentries in this table are numbered for ease of reference in the text performancethus knowledge of the true languagespecific rule proportions is not necessary for high accuracywe perform a set of additional experiments on english to gain further insight into hdpdeps behaviorour choice of language is motivated by the fact that a wide range of prior parsing algorithms were developed for and tested 
exclusively on englishthe experiments below demonstrate that 1 universal rules alone are powerful but languageand datasettailored rules can further improve performance 2 our model learns jointly from the rules and data outperforming a rulesonly deterministic parser 3 the way we incorporate posterior constraints outperforms the generalized expectation constraint framework and 4 our model exhibits low variance when seeded with different initializationsthese results are summarized in table 6 and discussed in detail below line numbers refer to entries in table 6each run of hdpdep below is with syntactic refinement enabledimpact of rules selection we compare the performance of hdpdep using the universal rules versus a set of rules designed for deterministically parsing the penn treebank as lines 1 and 5 of table 6 show languagespecific rules yield better performancefor sentences of length 10 or less the difference between the two rulesets is a relatively small 19 for longer sentences however the difference is a substantially larger 157this is likely because longer sentences tend to be more complex and thus exhibit more languageidiosyncratic dependenciessuch dependencies can be better captured by the refined languagespecific ruleswe also test model performance when no linguistic rules are available ie performing unconstrained variational inferencethe model performs substantially worse confirming that syntactic category refinement in a fully unsupervised setup is challenginglearning beyond provided rules since hdpdep is provided with linguistic rules a legitimate question is whether it improves upon what the rules encode especially when the rules are complete and languagespecificwe can answer this question by comparing the performance of our model seeded with the englishspecific rules against a deterministic parser that implements the same ruleslines 4 and 5 of table 6 demonstrate that the model outperforms a rulesonly deterministic parser by 38 for sentences of length 10 or less and by 35 for sentences of length 20 or lesscomparison with alternative semisupervised parser the dependency parser based on the generalized expectation criteria is the closest to our reported work in terms of techniqueto compare the two we run hdpdep using the 20 rules given by druck et al our model achieves an accuracy of 649 compared to 613 reported in their worknote that we do not rely on rulespecific expectation information as they do instead requiring only a single expectation constraint parameter4 model stability it is commonly acknowledged in the literature that unsupervised grammar induction methods exhibit sensitivity to initializationas in the previous section we find that the presence of linguistic rules greatly reduces this sensitivity for hdpdep the standard deviation over five randomly initialized runs with the englishspecific rules is 15 compared to 45 for the parser developed by headden iii et al and 80 for dmv in this paper we demonstrated that syntactic universals encoded as declarative constraints improve grammar inductionwe formulated a generative model for dependency structure that models syntactic category refinement and biases inference to cohere with the provided constraintsour experiments showed that encoding a compact wellaccepted set of languageindependent constraints significantly improves accuracy on multiple languages compared to the current stateoftheart in unsupervised parsingwhile our present work has yielded substantial gains over previous unsupervised methods a large gap still remains 
between our method and fully supervised techniquesin future work we intend to study ways to bridge this gap by 1 incorporating more sophisticated linguisticallydriven grammar rulesets to guide induction 2 lexicalizing the model and 3 combining our constraintbased approach with richer unsupervised models to benefit from their complementary strengthsthe authors acknowledge the support of the nsf we are especially grateful to michael collins for inspiring us toward this line of inquiry and providing deterministic rules for english parsingthanks to taylor bergkirkpatrick sabine iatridou ramesh sridharan and members of the mit nlp group for their suggestions and commentsany opinions findings conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding organizations
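The way the universal-rule constraint is imposed in the model above can be illustrated with a much-simplified toy: reweight a posterior over candidate dependency trees by exp(lam * f(z)), where f counts rule-matching edges, and choose the smallest lam >= 0 at which the expected proportion of rule-matching dependencies reaches the threshold (0.8 in the experiments above). In the actual model this projection happens inside variational inference via gradient search on a dual objective over parse charts; the explicit enumeration of candidate trees, the bisection search, and the toy numbers below are assumptions made only for illustration.

```python
import math

def expected_fraction(cands, lam):
    """E_q[rule-matching edges] / E_q[total edges] under q(z) prop. to p(z) * exp(lam * f(z))."""
    weights = [p * math.exp(lam * match) for p, match, _ in cands]
    total = sum(weights)
    e_match = sum(w * m for w, (_, m, _) in zip(weights, cands)) / total
    e_edges = sum(w * n for w, (_, _, n) in zip(weights, cands)) / total
    return e_match / e_edges

def project(cands, threshold=0.8, hi=50.0, iters=60):
    """Smallest lam in [0, hi] whose reweighted posterior meets the threshold.
    All candidate trees for one sentence have the same number of edges, so the
    expected fraction is monotone in lam and bisection applies; the cap hi is
    returned if the threshold cannot be reached."""
    if expected_fraction(cands, 0.0) >= threshold:
        return 0.0                      # the unconstrained posterior already satisfies the rule
    lo = 0.0
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if expected_fraction(cands, mid) >= threshold:
            hi = mid
        else:
            lo = mid
    return hi

# Candidates: (model probability, rule-matching edges, total edges).
cands = [(0.6, 2, 4), (0.3, 3, 4), (0.1, 4, 4)]
lam = project(cands)
print(round(lam, 3), round(expected_fraction(cands, lam), 3))
```

The effect is the one described above: probability mass shifts toward analyses that use more of the universal rules, but individual violations are still permitted because the constraint only holds in expectation.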
D10-1120
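The closed-form Dirichlet updates mentioned in the text above, in which each variational parameter is set to the prior plus the relevant expected counts and the resulting geometric-mean weights feed back into the parse-level factors, look roughly like the following. The array shapes and the symmetric prior value are illustrative assumptions; only the update rule itself reflects the description above.

```python
import numpy as np
from scipy.special import digamma

def update_dirichlet(expected_counts, prior):
    """One closed-form mean-field update for a Dirichlet factor: the posterior
    parameter is prior + expected counts, and the weights passed back to the
    parse-level factors are exp(E[log theta]) computed with the digamma function."""
    post = prior + expected_counts
    e_log_theta = digamma(post) - digamma(post.sum())
    return post, np.exp(e_log_theta)

# Toy usage: expected counts of three child symbols for one
# (parent symbol, subsymbol, context) combination, with a symmetric prior of 1.0.
counts = np.array([4.2, 0.5, 1.3])
post, weights = update_dirichlet(counts, prior=1.0)
print(post, weights / weights.sum())
```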
Using universal linguistic knowledge to guide grammar induction. We present an approach to grammar induction that utilizes syntactic universals to improve dependency parsing across a range of languages. Our method uses a single set of manually specified, language-independent rules that identify syntactic dependencies between pairs of syntactic categories that commonly occur across languages. During inference of the probabilistic model, we use posterior expectation constraints to require that a minimum proportion of the dependencies we infer be instances of these rules. We also automatically refine the syntactic categories given in our coarsely tagged input. Across six languages, our approach outperforms state-of-the-art unsupervised methods by a significant margin. Our system is weakly supervised, in that manually defined universal syntactic rules are used to constrain a probabilistic Bayesian model.
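The evaluation quantities reported above, directed dependency accuracy and the percentage of dependencies that satisfy a rule, are both simple counts over predicted and gold head indices. A minimal sketch follows; the two example rules are just the ones explicitly mentioned in the text (Auxiliary -> Verb and Preposition -> Noun), not the full set of thirteen, and the toy sentence and head indices are invented for illustration.

```python
def directed_accuracy(predicted_heads, gold_heads):
    """Fraction of tokens whose predicted head index matches the gold head (0 = root)."""
    correct = total = 0
    for pred, gold in zip(predicted_heads, gold_heads):
        correct += sum(p == g for p, g in zip(pred, gold))
        total += len(gold)
    return correct / total

def rule_coverage(heads, tags, rules):
    """Fraction of non-root dependencies whose (head tag, child tag) pair is a rule instance."""
    matches = total = 0
    for head_idx, sent_tags in zip(heads, tags):
        for child, head in enumerate(head_idx, start=1):
            if head == 0:
                continue                                   # skip the root attachment
            matches += (sent_tags[head - 1], sent_tags[child - 1]) in rules
            total += 1
    return matches / total

# Toy usage with 1-based head indices and coarse tags.
rules = {('Auxiliary', 'Verb'), ('Preposition', 'Noun')}   # two of the universal rules named above
gold = [[0, 1, 2]]                                         # "has eaten quickly": "has" is the root
pred = [[0, 1, 1]]
tags = [['Auxiliary', 'Verb', 'Adverb']]
print(directed_accuracy(pred, gold))                       # 2 of 3 heads correct
print(rule_coverage(gold, tags, rules))                    # 1 of 2 non-root edges matches a rule
```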
a latent variable model for geographic lexical variation the rapid growth of geotagged social media raises new computational possibilities for investigating geographic linguistic variation in this paper we present a multilevel generative model that reasons jointly about latent topics and geographical regions highlevel topics such as sports or entertainment are rendered differently in each geographic region revealing topicspecific regional distinctions applied to a new dataset of geotagged microblogs our model recovers coherent topics and their regional variants while identifying geographic areas of linguistic consistency the model also enables prediction of an authors geographic location from raw text outperforming both text regression and supervised topic models sociolinguistics and dialectology study how language varies across social and regional contextsquantitative research in these fields generally proceeds by counting the frequency of a handful of previouslyidentified linguistic variables pairs of phonological lexical or morphosyntactic features that are semantically equivalent but whose frequency depends on social geographical or other factors it is left to the experimenter to determine which variables will be considered and there is no obvious procedure for drawing inferences from the distribution of multiple variablesin this paper we present a method for identifying geographicallyaligned lexical variation directly from raw textour approach takes the form of a probabilistic graphical model capable of identifying both geographicallysalient terms and coherent linguistic communitiesone challenge in the study of lexical variation is that term frequencies are influenced by a variety of factors such as the topic of discoursewe address this issue by adding latent variables that allow us to model topical variation explicitlywe hypothesize that geography and topic interact as pure topical lexical distributions are corrupted by geographical factors for example a sportsrelated topic will be rendered differently in new york and californiaeach author is imbued with a latent region indicator which both selects the regional variant of each topic and generates the authors observed geographical locationthe regional corruption of topics is modeled through a cascade of logistic normal priorsa general modeling approach which we call cascading topic modelsthe resulting system has multiple capabilities including analyzing lexical variation by both topic and geography segmenting geographical space into coherent linguistic communities predicting author location based on text alonethis research is only possible due to the rapid growth of social mediaour dataset is derived from the microblogging website twitter1 which permits users to post short messages to the publicmany users of twitter also supply exact geographical coordinates from gpsenabled devices 2 yielding geotagged text datatext in computermediated communication is often more vernacular and as such it is more likely to reveal the influence of geographic factors than text written in a more formal genre such as news text we evaluate our approach both qualitatively and quantitativelywe investigate the topics and regions that the model obtains showing both commonsense results as well as lessobvious insights about slangquantitatively we apply our model to predict the location of unlabeled authors using text aloneon this task our model outperforms several alternatives including both discriminative text regression and related latentvariable approachesthe 
main dataset in this research is gathered from the microblog website twitter via its official apiwe use an archive of messages collected over the first week of march 2010 from the gardenhose sample stream3 which then consisted of 15 of all public messages totaling millions per daywe aggressively filter this stream using only messages that are tagged with physical coordinate pairs from a mobile client and whose authors wrote at least 20 messages over this periodwe also filter to include only authors who follow fewer than 1000 other people and have fewer than 1000 followerskwak et al find dramatic shifts in behavior among users with social graph connectivity outside of that range such users may be marketers celebrities with professional publicists news media sources etcwe also remove messages containing urls to eliminate bots posting information such as advertising or weather conditionsfor interpretability we restrict our attention to authors inside a bounding box around the contiguous yous states yielding a final sample of about 9500 users and 380000 messages totaling 47 million word tokenswe have made this dataset available online4 informal text from mobile phones is challenging to tokenize we adapt a publicly available tokenizer5 originally developed for twitter which preserves emoticons and blocks of punctuation and other symbols as tokensfor each users twitter feed we combine all messages into a single document we remove word types that appear in fewer than 40 feeds yielding a vocabulary of 5216 wordsof these 1332 do not appear in the english french or spanish dictionaries of the spellchecking program aspellevery message is tagged with a location but most messages from a single individual tend to come from nearby locations for modeling purposes we use only a single geographic location for each author simply taking the location of the first message in the samplethe authors in our dataset are fairly heavy twitter users posting an average of 40 messages per day we have little information about their demographics though from the text it seems likely that this user set skews towards teens and young adultsthe dataset covers each of the 48 contiguous united states and the district of columbiawe develop a model that incorporates two sources of lexical variation topic and geographical regionwe treat the text and geographic locations as outputs from a generative process that incorporates both topics and regions as latent variables6 during inference we seek to recover the topics and regions that best explain the observed dataat the base level of model are pure topics these topics are rendered differently in each regionwe call this general modeling approach a cascading topic model we describe it first in general terms before moving to the specific application to geographical variationcascading topic models generate text from a chain of random variableseach element in the chain defines a distribution over words and acts as the mean of the distribution over the subsequent element in the chainthus each element in the chain can be thought of as introducing some additional corruptionall words are drawn from the final distribution in the chainat the beginning of the chain are the priors followed by unadulerated base topics which may then be corrupted by other factors for example consider a base food topic that emphasizes words like dinner and delicious the corrupted foodcalifornia topic would place weight on these words but might place extra emphasis on other words like sproutsthe path through the 
cascade is determined by a set of indexing variables which may be hidden or observedas in standard latent dirichlet allocation the base topics are selected by a pertoken hidden variable zin the geographical topic model the next level corresponds to regions which are selected by a perauthor latent variable r formally we draw each level of the cascade from a normal distribution centered on the previous level the final multinomial distribution over words is obtained by exponentiating and normalizingto ensure tractable inference we assume that all covariance matrices are uniform diagonal ie ai with a 0 this means we do not model interactions between wordsthe application of cascading topic models to geographical variation is straightforwardeach document corresponds to the entire twitter feed of a given author during the time period covered by our corpusfor each author the latent variable r corresponds to the geographical region of the author which is not observedas described above r selects a corrupted version of each topic the kth basic topic has mean µk with uniform diagonal covariance u2k for region j we can draw the regionallycorrupted topic from the normal distribution ηjk nbecause η is normallydistributed it lies not in the simplex but in 8wwe deterministically compute multinomial parameters β by exponentiating and normalizing βjk exp ei exp jk this normalization could introduce identifiability problems as there are multiple settings for η that maximize p however this difficulty is obviated by the priors given µ and u2 there is only a single η that maximizes pp similarly only a single µ maximizes ppthe observed latitude and longitude denoted y are normally distributed and conditioned on the region with mean νr and precision matrix ar indexed by the region r the region index r is itself drawn from a single shared multinomial ϑthe model is shown as a plate diagram in figure 1given a vocabulary size w the generative story is as follows draw the base topic from a normal distribution with uniform diagonal covariance µk n draw the regional variance from a gamma distribution u2k generate regional variants for each region j j draw the regiontopic ηjk from a normal distribution with uniform diagonal covariance ηjk n convert ηjk into a multinomial distribution over words by exponentiating and normalizing where the denominator sums over the vocabularywe apply meanfield variational inference a fullyfactored variational distribution q is chosen to minimize the kullbackleibler divergence from the true distributionmeanfield variational inference with conjugate priors is described in detail elsewhere we restrict our focus to the issues that are unique to the geographic topic modelwe place variational distributions over all latent variables of interest θ z are ϑ η µ σ2 ν and λ updating each of these distributions in turn until convergencethe variational distributions over θ and ϑ are dirichlet and have closed form updates each can be set to the sum of the expected counts plus a term from the prior the variational distributions q and q are categorical and can be set proportional to the expected joint likelihoodto set q we marginalize over r and vice versa7 the updates for the multivariate gaussian spatial parameters ν and λ are described by penny the variational regiontopic distribution ηjk is normal with uniform diagonal covariance for tractabilitythroughout we will write hxi to indicate the expectation of x under the variational distribution qthus the vector mean of the distribution q is written hηjki 
while the variance of q is written vto update the mean parameter hηjki we maximize the contribution to the variational bound l from the relevant terms 7thanks to the naive mean field assumption we can marginalize over z by first decomposing across all nd words and then summing over q with the first term representing the likelihood of the observed words and the second term corresponding to the priorthe likelihood term requires the expectation hlog βi but this is somewhat complicated by the normalizer ewi exp which sums over all terms in the vocabularyas in previous work on logistic normal topic models we use a taylor approximation for this term the prior on η is normal so the contribution from the second term of the objective is jk µki2iwe introduce the following notation for expected counts n indicates the expected count of term i in region j and topic k and n ei nafter some calculus we can write the gradient lhη which has an intuitive interpretationthe first two terms represent the difference in expected counts for term i under the variational distributions q and q this difference goes to zero when β jk perfectly matches nnthe third term penalizesη jkfor deviating from its prior µ k but this penalty is proportional to the expected inverse variance hσ2 k iwe apply gradient ascent to maximize the objective l a similar set of calculations gives the gradient for the variance of η these are described in an forthcoming appendixthe base topic parameters are µk and σ2k in the variational distribution q is normally distributed and q is gamma distributednote that µk and σ2k affect only the regional word distributions 7 jkan advantage of the logistic normal is that the variational parameters over µk are available in closed form where j indicates the number of regionsthe expectation of the base topic µ incorporates the prior and the average of the generated regiontopics these two components are weighted respectively by the expected variance of the regiontopics and the prior topical variance b2the posterior variance v is a harmonic combination of the prior variance b2 and the expected variance of the region topicsthe variational distribution over the regiontopic variance σ2k has gamma parametersthese parameters cannot be updated in closed form so gradient optimization is again requiredthe derivation of these updates is more involved and is left for a forthcoming appendixvariational scheduling and initialization are important aspects of any hierarchical generative model and are often underdiscussedin our implementation the variational updates are scheduled as follows given expected counts we iteratively update the variational parameters on the regiontopics 77 and the base topics µ until convergencewe then update the geographical parameters v and a as well as the distribution over regions 0finally for each document we iteratively update the variational parameters over 0 z and r until convergence obtaining expected counts that are used in the next iteration of updates for the topics and their regional variantswe iterate an outer loop over the entire set of updates until convergencewe initialize the model in a piecewise fashionfirst we train a dirichlet process mixture model on the locations y using variational inference on the truncated stickbreaking approximation this automatically selects the number of regions j and gives a distribution over each region indicator rd from geographical information alonewe then run standard latent dirichlet allocation to obtain estimates of z for each token from this 
initialization we can compute the first set of expected counts which are used to obtain initial estimates of all parameters needed to begin variational inference in the full modelthe prior a is the expected mean of each topic µ for each term i we set a log n log n where n is the total count of i in the corpus and n ei nthe variance prior b2 is set to 1 and the prior on σ2 is the gamma distribution 9 encouraging minimal deviation from the base topicsthe symmetric dirichlet prior on 0 is set to 12 and the symmetric dirichlet parameter on ϑ is updated from weak hyperpriors finally the geographical model takes priors that are linked to the data for each region the mean is very weakly encouraged to be near the overall mean and the covariance prior is set by the average covariance of clusters obtained by running kmeansfor a quantitative evaluation of the estimated relationship between text and geography we assess our models ability to predict the geographic location of unlabeled authors based on their text alone8 this task may also be practically relevant as a step toward applications for recommending local businesses or social connectionsa randomlychosen 60 of authors are used for training 20 for development and the remaining 20 for final evaluationwe compare several approaches for predicting author location we divide these into latent variable generative models and discriminative approaches8alternatively one might evaluate the attributed regional memberships of the words themselveswhile the dictionary of american regional english attempts a comprehensive list of all regionallyaffiliated terms it is based on interviews conducted from 19651970 and the final volume is not yet completegeographic topic model this is the full version of our system as described in this paperto predict the unseen location yd we iterate until convergence on the variational updates for the hidden topics zd the topic proportions 9d and the region rdfrom rd the location can be estimated as yd j argmaxy j pqthe development set is used to tune the number of topics and to select the best of multiple random initializationsmixture of unigrams a core premise of our approach is that modeling topical variation will improve our ability to understand geographical variationwe test this idea by fixing k 1 running our system with only a single topicthis is equivalent to a bayesian mixture of unigrams in which each author is assigned a single regional unigram language model that generates all of his or her textthe development set is used to select the best of multiple random initializationssupervised latent dirichlet allocation in a more subtle version of the mixtureofunigrams model we model each author as an admixture of regionsthus the latent variable attached to each author is no longer an index but rather a vector on the simplexthis model is equivalent to supervised latent dirichlet allocation each topic is associated with equivariant gaussian distributions over the latitude and longitude and these topics must explain both the text and the observed geographical locationsfor unlabeled authors we estimate latitude and longitude by estimating the topic proportions and then applying the learned geographical distributionsthis is a linear prediction f for an authors topic proportions zd and topicgeography weights a e r2ktext regression we perform linear regression to discriminatively learn the relationship between words and locationsusing term frequency features xd for each author we predict locations with wordgeography weights a e r2w f 
which obtained good results on other textbased prediction tasks regularization parameters were tuned on the development setthe l1 penalty outperformed l2 and mixtures of l1 and l2note that for both wordlevel linear regression here and the topiclevel linear regression in slda the choice of squared euclidean distance dovetails with our use of spatial gaussian likelihoods in the geographic topic models since optimizing a is equivalent to maximum likelihood estimation under the assumption that locations are drawn from equivariant circular gaussians centered around each f linear predictionwe experimented with decorrelating the location dimensions by projecting yd into the principal component space but this did not help text regressionknearest neighbors linear regression is a poor model for the multimodal density of human populationsas an alternative baseline we applied supervised knearest neighbors to predict the location yd as the average of the positions of the k most similar authors in the training setwe computed termfrequency inversedocument frequency features and applied cosine similarity over their first 30 principal components to find the neighborsthe choices of principal components idf weighting and neighborhood size k 20 were tuned on the development setour principle error metrics are the mean and median distance between the predicted and true location in kilometers9 because the distance error may be difficult to interpret we also report accuracy of classification by state and by region of the united statesour data includes the 48 contiguous states plus the district of columbia the yous census bureau divides these states into four regions west midwest northeast and south10 note that while major population centers straddle several state lines most region boundaries are far from the largest cities resulting in a clearer analysisas shown in table 1 the geographic topic model achieves the strongest performance on all metricsall differences in performance between systems are statistically significant using the wilcoxonmannwhitney test for regression error and the k2 test for classification accuracyfigure 2 shows how performance changes as the number of topics variesnote that the geographic topic model and the mixture of unigrams use identical code and parametrization the only difference is that the geographic topic model accounts for topical variation while the mixture of unigrams sets k 1these results validate our basic premise that it is important to model the interaction between topical and geographical variationtext regression and supervised lda perform especially poorly on the classification metricboth methods make predictions that are averaged across earths surface requires computing or approximating the great circle distance we use the haversine formula for the continental yous the relationship between degrees and kilometers is nearly linear but extending the model to a continental scale would require a more sophisticated approach each word in the document in text regression each word is directly multiplied by a feature weight in supervised lda the word is associated with a latent topic first and then multiplied by a weightfor these models all words exert an influence on the predicted location so uninformative words will draw the prediction towards the center of the mapthis yields reasonable distance errors but poor classification accuracywe had hoped that knearest neighbors would be a better fit for this metric but its performance is poor at all values of k of course it is always 
possible to optimize classification accuracy directly but such an approach would be incapable of predicting the exact geographical location which is the focus of our evaluation note that the geographic topic model is also not trained to optimize classification accuracyour model permits analysis of geographical variation in the context of topics that help to clarify the significance of geographicallysalient termstable 2 shows a subset of the results of one randomlyinitialized run including five handchosen topics and five regions terms were selected by logodds comparisonfor the base topics we show the ten strongest terms in each topic as compared to the background word distributionfor the regional variants we show terms that are strong both regionally and topically specifically we select terms that are in the top 100 compared to both the background distribution and to the base topicthe names for the topics and regions were chosen by the authorsnearly all of the terms in column 1 refer to sports teams athletes and place names encouragingly terms tend to appear in the regions where their referents residecolumn 2 contains several proper nouns mostly referring to popular music figures 11 columns 35 are more conversationalspanishlanguage terms tend to appear in regions with large spanishspeaking populationsit is also telling that these terms appear in topics with emoticons and slang abbreviations which may transcend linguistic barriersother terms refer to people or subjects that may be especially relevant in certain regions tacos appears in the southern california region and cab in the new york region tupac refers to a rap musician from los angeles and wiz refers to a rap musician from pittsburgh not far from the center of the lake erie regiona large number of slang terms are found to have strong regional biases suggesting that slang may depend on geography more than standard english doesthe terms af and hella display especially strong regional affinities appearing in the regional variants of multiple topics northern and southern california use variant spellings koo and coo to express the same meaning11this analysis is from an earlier version of our dataset that contained some twitterbots including one from a bostonarea radio stationthe bots were purged for the evaluation in section 6 though the numerical results are nearly identical term definition term definition af as fuck jk just kidding coo cool jp just playing fasho for sure koo cool gna going to lol laugh out loud hella very nm nothing much hr hour od overdone iam i am omw on my way i am about to i am going to smh shake my head imm i am suttin something iono i do not know wassup what is up lames lame wyd what are you dopeople ingwhile research in perceptual dialectology does confirm the link of hella to northern california we caution that our findings are merely suggestive and a more rigorous analysis must be undertaken before making definitive statements about the regional membership of individual termswe view the geographic topic model as an exploratory tool that may be used to facilitate such investigationsfigure 3 shows the regional clustering on the training set obtained by one run of the modeleach point represents an author and the ellipses represent the bivariate gaussians for each regionthere are nine compact regions for major metropolitan areas two slightly larger regions that encompass florida and the area around lake erie and two large regions that partition the country roughly into north and souththe relationship between 
language and geography has been a topic of interest to linguists since the nineteenth century an early work of particular relevance is kuraths word geography of the eastern united states in which he conducted interviews and then mapped the occurrence of equivalent word pairs such as stoop and porchthe essence of this approachidentifying variable pairs and measuring their frequencies remains a dominant methodology in both dialectology and sociolinguistics within this paradigm computational techniques are often applied to post hoc analysis logistic regression and mixedeffects models are used to measure the contribution of individual variables while hierarchical clustering and multidimensional scaling enable aggregated inference across multiple variables however in all such work it is assumed that the relevant linguistic variables have already been identifieda timeconsuming process involving considerable linguistic expertisewe view our work as complementary to this tradition we work directly from raw text identifying both the relevant features and coherent linguistic communitiesan active recent literature concerns geotagged information on the web such as search queries and tagged images this research identifies the geographic distribution of individual queries and tags but does not attempt to induce any structural organization of either the text or geographical space which is the focus of our researchmore relevant is the work of mei et al in which the distribution over latent topics in blog posts is conditioned on the geographical location of the authorthis is somewhat similar to the supervised lda model that we consider but their approach assumes that a partitioning of geographical space into regions is already givenmethodologically our cascading topic model is designed to capture multiple dimensions of variability topics and geographymei et al include sentiment as a second dimension in a topic model using a switching variable so that individual word tokens may be selected from either the topic or the sentimenthowever our hypothesis is that individual word tokens reflect both the topic and the geographical aspectsharing this intuition paul and girju build topicaspect models for the cross product of topics and aspectsthey do not impose any regularity across multiple aspects of the same topic so this approach may not scale when the number of aspects is large we address this issue using cascading distributions when the observed data for a given regiontopic pair is low the model falls back to the base topicthe use of cascading logistic normal distributions in topic models follows earlier work on dynamic topic models this paper presents a model that jointly identifies words with high regional affinity geographicallycoherent linguistic regions and the relationship between regional and topic variationthe key modeling assumption is that regions and topics interact to shape observed lexical frequencieswe validate this assumption on a prediction task in which our model outperforms strong alternatives that do not distinguish regional and topical variationwe see this work as a first step towards a unsupervised methodology for modeling linguistic variation using raw textindeed in a study of morphosyntactic variation szmrecsanyi finds that by the most generous measure geographical factors account for only 33 of the observed variationour analysis might well improve if nongeographical factors were considered including age race gender income and whether a location is urban or ruralin some regions estimates 
of many of these factors may be obtained by crossreferencing geography with demographic datawe hope to explore this possibility in future workwe would like to thank amr ahmed jonathan chang shay cohen william cohen ross curtis miro dudík scott kiesling seyoung kim and the anonymous reviewersthis research was enabled by googles support of the worldly knowledge project at cmu afosr fa9550010247 onr n0001140910758 nsf career dbi0546594 nsf iis0713379 and an alfred p sloan fellowship
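The generative story above (base topics whose log-space parameters are corrupted per region, then exponentiated and normalized, with each author's location drawn from a region-specific Gaussian) can be illustrated with a short simulation. This is only a toy sketch of that story, not the authors' implementation: the dimensions, hyperparameter values, and variable names below are illustrative assumptions, and the region covariances are given directly (the paper parameterizes regions by precision matrices and topics by uniform diagonal covariances).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only; the real vocabulary has 5216 types).
W, K, J, D, N = 1000, 5, 13, 20, 50   # vocab, topics, regions, authors, tokens per author

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Base topics: log-space means mu_k, each with a per-topic variance sigma2_k.
mu = rng.normal(0.0, 1.0, size=(K, W))
sigma2 = rng.gamma(2.0, 1.0, size=K)

# Regional variants: eta_jk ~ Normal(mu_k, sigma2_k * I), then exponentiate and normalize.
eta = np.stack([[rng.normal(mu[k], np.sqrt(sigma2[k])) for k in range(K)] for _ in range(J)])
beta = np.apply_along_axis(softmax, -1, eta)            # shape (J, K, W): region-topic word dists

# Region proportions, region centers and covariances (toy values near the contiguous US).
vartheta = rng.dirichlet(np.ones(J))
nu = rng.normal([39.0, -98.0], [6.0, 15.0], size=(J, 2))
Lambda = np.stack([np.diag([2.0, 4.0]) for _ in range(J)])

def generate_author():
    r = rng.choice(J, p=vartheta)                       # latent region of the author
    y = rng.multivariate_normal(nu[r], Lambda[r])       # observed latitude/longitude
    theta = rng.dirichlet(np.full(K, 1.2))              # author's topic proportions
    z = rng.choice(K, size=N, p=theta)                  # per-token topic indicators
    w = np.array([rng.choice(W, p=beta[r, zk]) for zk in z])  # words from region-topic dists
    return r, y, z, w
```

The sketch only shows the forward direction of the model; inference in the paper is mean-field variational, not sampling.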
D10-1124
a latent variable model for geographic lexical variationthe rapid growth of geotagged social media raises new computational possibilities for investigating geographic linguistic variationin this paper we present a multilevel generative model that reasons jointly about latent topics and geographical regionshighlevel topics such as sports or entertainment are rendered differently in each geographic region revealing topicspecific regional distinctionsapplied to a new dataset of geotagged microblogs our model recovers coherent topics and their regional variants while identifying geographic areas of linguistic consistencythe model also enables prediction of an authors geographic location from raw text outperforming both text regression and supervised topic modelswe gathered the text and geographical locations of 9250 microbloggers on the website twittercom to construct a datasetwe collected about 380000 tweets from twitter official apiwe predict locations based on gaussian distributions over the earth surface as part of a hierarchical bayesian modelwe consider all tweets of a user concatenated as a single document and use the earliest collected gpsassigned location as the gold location
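The location-prediction evaluation described above reports mean and median error in kilometers, using great-circle distance computed with the haversine formula. A minimal sketch of that metric follows; the function names and the Earth-radius constant are my own choices, not taken from the paper.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0  # mean Earth radius; an approximation

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points given in degrees."""
    phi1, phi2 = np.radians(lat1), np.radians(lat2)
    dphi = phi2 - phi1
    dlmb = np.radians(lon2 - lon1)
    a = np.sin(dphi / 2.0) ** 2 + np.cos(phi1) * np.cos(phi2) * np.sin(dlmb / 2.0) ** 2
    return 2.0 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def location_errors(pred, gold):
    """pred, gold: arrays of shape (n_authors, 2) holding (lat, lon) in degrees."""
    d = haversine_km(pred[:, 0], pred[:, 1], gold[:, 0], gold[:, 1])
    return d.mean(), np.median(d)

# Example: a prediction one degree of longitude off near Kansas is roughly 86 km away.
print(haversine_km(39.0, -98.0, 39.0, -97.0))
```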
dual decomposition for parsing with nonprojective head automata this paper introduces algorithms for nonprojective parsing based on dual decomposition we focus on parsing algorithms for nonprojective head automata a generalization of headautomata models to nonprojective structures the dual decomposition algorithms are simple and efficient relying on standard dynamic programming and minimum spanning tree algorithms they provably solve an lp relaxation of the nonprojective parsing problem empirically the lp relaxation is very often tight for many languages exact solutions are achieved on over 98 of test sentences the accuracy of our models is higher than previous work on a broad range of datasets nonprojective dependency parsing is useful for many languages that exhibit nonprojective syntactic structuresunfortunately the nonprojective parsing problem is known to be nphard for all but the simplest models there has been a long history in combinatorial optimization of methods that exploit structure in complex problems using methods such as dual decomposition or lagrangian relaxation thus far however these methods are not widely used in nlpthis paper introduces algorithms for nonprojective parsing based on dual decompositionwe focus on parsing algorithms for nonprojective head automata a generalization of the headautomata models of eisner and alshawi to nonprojective structuresthese models include nonprojective dependency parsing models with higherorder dependency relations as a special casealthough decoding of full parse structures with nonprojective head automata is intractable we leverage the observation that key components of the decoding can be efficiently computed using combinatorial algorithmsin particular in this paper we first give the definition for nonprojective head automata and describe the parsing algorithmthe algorithm can be viewed as an instance of lagrangian relaxation we describe this connection and give convergence guarantees for the methodwe describe a generalization to models that include grandparent dependencieswe then introduce a perceptrondriven training algorithm that makes use of point 1 abovewe describe experiments on nonprojective parsing for a number of languages and in particular compare the dual decomposition algorithm to approaches based on generalpurpose linear programming or integer linear programming solvers the accuracy of our models is higher than previous work on a broad range of datasetsthe method gives exact solutions to the decoding problem together with a certificate of optimality on over 98 of test examples for many of the test languages with parsing times ranging between 0021 secondssentence for the most simple languagesmodels to 0295 secondssentence for the most complex settingsthe method compares favorably to previous work using lpilp formulations both in terms of efficiency and also in terms of the percentage of exact solutions returnedwhile the focus of the current paper is on nonprojective dependency parsing the approach opens up new ways of thinking about parsing algorithms for lexicalized formalisms such as tag ccg and projective head automatamcdonald et al describe mstbased parsing for nonprojective dependency parsing models with arcfactored decompositions mcdonald and pereira make use of an approximate algorithm for parsing with more complex modelsmcdonald and pereira and mcdonald and satta describe complexity results for nonprojective parsing showing that parsing for a variety of models is nphardriedel and clarke describe ilp methods for the problem martins et al recently
introduced alternative lp and ilp formulationsour algorithm differs in that we do not use generalpurpose lp or ilp solvers instead using an mst solver in combination with dynamic programming thus we leverage the underlying structure of the problem thereby deriving more efficient decoding algorithmsboth dual decomposition and lagrangian relaxation have a long history in combinatorial optimizationour work was originally inspired by recent work on dual decomposition for inference in graphical models however the nonprojective parsing problem has a very different structure from these models and the decomposition we use is very different in nature from those used in graphical modelsother work has made extensive use of decomposition approaches for efficiently solving lp relaxations for graphical models methods that incorporate combinatorial solvers within loopy belief propagation are also closely related to our approachunlike lbp our method has strong theoretical guarantees such as guaranteed convergence and the possibility of a certificate of optimalityfinally in other recent work rush et al describe dual decomposition approaches for other nlp problemsthis section describes a particular class of models sibling models the next section describes a dualdecomposition algorithm for decoding these modelsconsider the dependency parsing problem for a sentence with n wordswe define the index set for dependency parsing to be z i e 0 n j e 1 n i ja dependency parse is a vector y y e z where y 1 if a dependency with head word i and modifier j is in the parse 0 otherwisewe use i 0 for the root symbolwe define y to be the set of all wellformed nonprojective dependency parses given a function f y h r that assigns scores to parse trees the optimal parse is a particularly simple definition of f is f eet yθ where θ is the score for dependency models with this form are often referred to as arcfactored modelsin this case the optimal parse tree y can be found efficiently using mst algorithms this paper describes algorithms that compute y for more complex definitions of f in this section we focus on algorithms for models that capture interactions between sibling dependenciesto this end we will find it convenient to define the following notationgiven a vector y define hence yi specifies the set of modifiers to word i note that the vectors yi for i 0 n form a partition of the full set of variableswe then assume that f takes the form thus f decomposes into a sum of terms where each fi considers modifiers to the ith word alonein the general case finding y argmaxyey f under this definition of f is an nphard problemhowever for certain definitions of fi it is possible to efficiently compute argmaxyizi fi for any value of i typically using dynamic programmingin these cases we can efficiently compute where z z zi e zi for i 0 n by simply computing zi argmaxzizi fi for i 0 n eq3 can be considered to be an approximation to eq1 where we have replaced y with zwe will make direct use of this approximation in the dual decomposition parsing algorithmnote that y c z and in all but trivial cases y is a strict subset of zfor example a structure z e z could have z z 1 for some it could contain longer cycles or it could contain words that do not modify exactly one headnevertheless with suitably powerful functions fifor example functions based on discriminative modelsz may be a good approximation to ylater we will see that dual decomposition can effectively use mst inference to rule out illformed structureswe now give the main assumption 
underlying sibling models assumption 1 a model f satisfies the siblingdecomposition assumption if 1 f eni0 fi for some set offunctions f0 fn2 for any i e 0 n for any value of the variables you e r for j 1 n it is possible to compute the second condition includes additional terms involving you variables that modify the scores of individual dependenciesthese terms are benign for most definitions of fi in that they do not alter decoding complexitythey will be of direct use in the dual decomposition parsing algorithmexample 1 bigram sibling modelsrecall that yi is a binary vector specifying which words are modifiers to the headword idefine l1 lp to be the sequence of left modifiers to word i under yi and r1 rq to be the set of right modifiers y 0 and y y 1 in this case p 1 l1 2 and q 1 r1 4in bigram sibling models we have where l0 r0 start is the initial state and lp1 rq1 end is the end statethe functions gl and gr assign scores to bigram dependencies to the left and right of the headunder this model calculating argmaxyizi ej youy takes o time using dynamic programming hence the model satisfies assumption 1example 2 head automata headautomata models constitute a second important model type that satisfy the siblingdecomposition assumption these models make use of functions gr where s e s s0 e s are variables in a set of possible states s and r is an index of a word in the sentence such that i l du by standard results argmaxyy h eij youysubgradient optimization methods are iterative algorithms with updates that are similar to gradient descent we omit the details except to note that when the lp relaxation is not tight the optimal primal solution to the lp relaxation could be recovered by averaging methods where αk is a step sizeit is easily verified that the algorithm in figure 1 uses precisely these updateswith an appropriate choice of the step sizes αk the subgradient method can be shown to solve the dual problem iesee korte and vygen page 120 for detailsas mentioned before the dual provides an upper bound on the optimum of the primal problem however we do not necessarily have strong dualityie equality in the above equation because the sets i and y are discrete setsthat said for some functions h and f strong duality does hold as stated in the following l z y l where the last equality is because y z are defined as the respective argmaxsthus the inequality in eq9 is tight and z and you are primal and dual optimalalthough the algorithm is not guaranteed to satisfy y z for some k by theorem 1 if it does reach such a state then we have the guarantee of an exact solution to eq4 with the dual solution you providing a certificate of optimalitywe show in the experiments that this occurs very frequently in spite of the parsing problem being nphardit can be shown that eq8 is the dual of an lp relaxation of the original problemwhen the conditions of theorem 1 are satisfied it means that the lp relaxation is tight for this instancefor brevityin this section we extend the approach to consider grandparent relationsin grandparent models each parse tree y is represented as a vector where we have added a second set of duplicate variables y for all e zthe set of all valid parse trees is then defined as so as before yi contains variables y which indicate which words modify the ith wordin addition yi includes y variables that indicate the word that word i itself modifiesthe set of all possible values of yi is now hence the y variables can take any values but only one of the y variables can be equal to 1 as before 
we define i y yi e ii for i 0 nwe introduce the following assumption again it follows that we can approxiresulting vector z may be deficient in two respectsfirst the variables z may not form a wellformed directed spanning treesecond we may have z z for some values of example 3 grandparentsibling models an important class of models that satisfy assumption 2 are defined as followsagain for a vector yi define l1 lp to be the sequence of left modifiers to word i under yi and r1 rq to be the set of right modifiersdefine k to the value for k such that y 1then the model is defined as follows this is very similar to the bigramsibling model but with the modification that the gl and gr functions depend in addition on the value for kthis allows these functions to model grandparent dependencies such as and sibling dependencies such as finding zi under the definition can be accomplished in o time by decoding the model using dynamic programming separately for each of the o possible values of k and picking the value for k that gives the maximum value under these decodingsa dualdecomposition algorithm for models that satisfy the gsd assumption is shown in figure 2the algorithm can be justified as an instance of lagrangian relaxation applied to the problem the algorithm employs two sets of lagrange multipliers you and v corresponding to constraints in eqs11 and 12as in theorem 1 if at any point in the algorithm z y then y is an exact solution to the problem in eq10in our experiments we make use of discriminative linear models where for an input sentence x the score for a parse y is f w φ where w e rd is a parameter vector and φ e rd is a featurevector representing parse tree y in conjunction with sentence xwe will assume that the features decompose in the same way as the siblingdecomposable or grandparentsiblingdecomposable models that is φ pni0 φ for some feature vector definition φin the bigram sibling models in our experiments we assume that where as before l1 lp and r1 rq are left and right modifiers under yi and where φl and φr are feature vector definitionsin the grandparent models in our experiments we use a similar definition with feature vectors φl and φr where k is the parent for word i under yiwe train the model using the averaged perceptron for structured problems given the ith example in the training set y the perceptron updates are as follows the first step involves inference over the set z rather than y as would be standard in the perceptronthus decoding during training can be achieved by dynamic programming over head automata alone which is very efficientour training approach is closely related to local training methods we have found this method to be effective very likely because z is a superset of your training algorithm is also related to recent work on training using outer bounds note however that the lp relaxation optimized by dual decomposition is significantly tighter than zthus an alternative approach would be to use the dual decomposition algorithm for inference during trainingwe report results on a number of data setsfor comparison to martins et al we perform experiments for danish dutch portuguese slovene swedish and turkish data from the conllx shared task and english data from the conll2008 shared task we use the official trainingtest splits for these data sets and the same evaluation methodology as martins et al for comparison to smith and eisner we also report results on danish and dutch using their alternate trainingtest splitfinally we report results on the english wsj 
treebank and the prague treebankwe use feature sets that are very similar to those described in carreras we use marginalbased pruning using marginals calculated from an arcfactored spanning tree model using the matrixtree theorem in all of our experiments we set the value k the maximum number of iterations of dual decomposition in figures 1 and 2 to be 5000if the algorithm does not terminateie it does not return z within 5000 iterationswe simply take the parse y with the maximum value of f as the output from the algorithmat first sight 5000 might appear to be a large number but decoding is still fastsee sections 73 and 74 for discussion2 the strategy for choosing step sizes αk is described in appendix a along with other detailswe first discuss performance in terms of accuracy success in recovering an exact solution and parsing speedwe then describe additional experiments examining various aspects of the algorithmtable 1 shows results for previous work on the various data sets and results for an arcfactored model with pure mst decoding with our features for dependency accuracywe also show results for the bigramsibling and grandparentsibling models under dual decompositionboth the bigramsibling and gs models show large improvements over the arcfactored approach they also compare favorably to previous workfor example the gs model gives better results than all results reported in the conllx shared task on all languagesnote that we use different feature sets from both martins et al and smith and eisner next we consider how often our algorithms return an exact solution to the original optimization problem with a certificateie how often the algorithms in figures 1 and 2 terminate with y z for some value of k 0 since f lthen define αk s where 77k is the number of times that l l for k k hence the learning rate drops at a rate of 1 where t is the number of times that the dual increases from one iteration to the nexta2 use of the y parameters the parsing algorithms both consider a generalized problem that includes y parameterswe now describe how these can be usefulrecall that the optimization problem is to solve argmaxzezyey f h subject to a set of agreement constraintsin our models f can be written as f eij αz where f includes only terms depending on higherorder and α are weights that consider the dependency between i and j alonefor any value of 0 q 1 the problem argmaxzezyey f2 h2 is equivalent to the original problem if f2 f j αz and h2 q eij αywe have simply shifted the α weights from one model to the otherwhile the optimization problem remains the same the algorithms in figure 1 and 2 will converge at different rates depending on the value for qin our experiments we set q 0001 which puts almost all the weight in the headautomata models but allows weights on spanning tree edges to break ties in mst inference in a sensible waywe suspect this is important in early iterations of the algorithm when many values for you or v will be zero and where with q 0 many spanning tree solutions y would be essentially random leading to very noisy updates to the you and v valueswe have not tested other values for q
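The decoding algorithm described above alternates between head-automata decoding and maximum-spanning-tree decoding, updating Lagrange multipliers with subgradient steps until the two subproblems agree. The following is a sketch of that coordination loop under stated assumptions: the two subproblem solvers are supplied as callables with the signatures given in the docstring, the step-size schedule follows the prose description of the appendix (the exact formula is partly garbled in the text above), and the fallback when the loop does not converge is simplified relative to the paper, which keeps the highest-scoring tree seen under f.

```python
from collections import defaultdict

def dual_decomposition_parse(solve_sibling, solve_mst, max_iter=5000, c=0.01):
    """
    Coordination loop sketched from the paper's description (not the authors' code).

    solve_sibling(u) -> (z, f_val): head-automata decoding of
        argmax_z f(z) + sum_{(i,j)} u[(i,j)] * z(i,j), by dynamic programming.
    solve_mst(u)     -> (y, h_val): maximum spanning tree decoding of
        argmax_y h(y) - sum_{(i,j)} u[(i,j)] * y(i,j).
    Both return a set of (head, modifier) arcs plus the value of their own
    objective including the u terms. u is a dict over (head, modifier) pairs.
    """
    u = defaultdict(float)
    prev_dual, increases = None, 0
    fallback = None
    for _ in range(max_iter):
        z, f_val = solve_sibling(u)
        y, h_val = solve_mst(u)
        if z == y:
            return y, True                 # agreement: certificate of optimality
        dual = f_val + h_val               # value of the dual, an upper bound
        if prev_dual is not None and dual > prev_dual:
            increases += 1                 # dual increased: decay the step size
        prev_dual = dual
        alpha = c / (1.0 + increases)      # step-size schedule per the appendix prose
        # Subgradient step on arcs where the two subproblems disagree.
        for arc in set(z) | set(y):
            u[arc] -= alpha * ((arc in z) - (arc in y))
        fallback = y                       # simplification: paper keeps best-scoring y
    return fallback, False
```

A usage would plug in a bigram-sibling dynamic program and an arc-factored MST decoder for the two callables; the True/False flag records whether the exact-solution certificate was obtained.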
D10-1125
dual decomposition for parsing with nonprojective head automatathis paper introduces algorithms for nonprojective parsing based on dual decompositionwe focus on parsing algorithms for nonprojective head automata a generalization of headautomata models to nonprojective structuresthe dual decomposition algorithms are simple and efficient relying on standard dynamic programming and minimum spanning tree algorithmsthey provably solve an lp relaxation of the nonprojective parsing problemempirically the lp relaxation is very often tight for many languages exact solutions are achieved on over 98 of test sentencesthe accuracy of our models is higher than previous work on a broad range of datasetswe consider thirdorder features such as grandsiblings and trisiblings
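Inside each iteration of the dual decomposition loop above, the sibling subproblem is solved by dynamic programming over each head word's modifier sequence. The sketch below covers only the right-modifier half of a bigram-sibling head automaton with the u(i, j) adjustments from assumption 1; the left side is symmetric, the score function g_right would come from a trained linear model, and the function and variable names (and the dict-based u) are my own.

```python
def decode_right_modifiers(i, n, g_right, u):
    """
    For head word i in a sentence of length n, choose the highest-scoring
    increasing sequence of right modifiers r_1 < ... < r_q in (i, n], scoring
    each bigram transition g_right(i, prev, cur) over the sequence (with
    START/END boundaries) plus u[(i, j)] for every chosen modifier j.
    Returns (score, list_of_modifiers). Runs in O(n^2) for one head and side.
    """
    START, END = "<s>", "</s>"
    best, back = {}, {}     # best[j]: best score of a sequence whose last modifier is j
    for j in range(i + 1, n + 1):
        cands = [(g_right(i, START, j), START)]
        cands += [(best[p] + g_right(i, p, j), p) for p in range(i + 1, j)]
        s, prev = max(cands, key=lambda t: t[0])
        best[j] = s + u.get((i, j), 0.0)
        back[j] = prev
    # Close the sequence with the END transition, or take no modifiers at all.
    closed = [(g_right(i, START, END), None)]
    closed += [(best[j] + g_right(i, j, END), j) for j in best]
    score, last = max(closed, key=lambda t: t[0])
    mods = []
    while last is not None and last != START:
        mods.append(last)
        last = back[last]
    return score, list(reversed(mods))

# Tiny usage sketch with toy scores: a flat transition penalty plus u bonuses.
g = lambda head, prev, cur: 0.0 if cur == "</s>" else -0.1
print(decode_right_modifiers(2, 6, g, {(2, 4): 1.0, (2, 5): 0.3}))  # score ~1.1, modifiers [4, 5]
```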
multisource transfer of delexicalized dependency parsers we present a simple method for transferring dependency parsers from source languages with labeled training data to target languages without labeled training data we first demonstrate that delexicalized parsers can be directly transferred between languages producing significantly higher accuracies than unsupervised parsers we then use a constraint driven learning algorithm where constraints are drawn from parallel corpora to project the final parser unlike previous work on projecting syntactic resources we show that simple methods for introducing multiple source languages can significantly improve the overall quality of the resulting parsers the projected parsers from our system result in stateoftheart performance when compared to previously studied unsupervised and projected parsing systems across eight different languages statistical parsing has been one of the most active areas of research in the computational linguistics community since the construction of the penn treebank this includes work on phrasestructure parsing dependency parsing as well as a number of other formalisms as underlying modeling techniques have improved these parsers have begun to converge to high levels of accuracy for english newswire textsubsequently researchers have begun to look at both porting these parsers to new domains and constructing parsers for new languages one major obstacle in building statistical parsers for new languages is that they often lack the manually annotated resources available for englishthis observation has led to a vast amount of research on unsupervised grammar induction grammar induction systems have seen large advances in quality but parsing accuracies still significantly lag behind those of supervised systemsfurthermore they are often trained and evaluated under idealized conditions eg only on short sentences or assuming the existence of goldstandard partofspeech tags1 the reason for these assumptions is clearunsupervised grammar induction is difficult given the complexity of the analysis spacethese assumptions help to give the model tractionthe study of unsupervised grammar induction has many meritsmost notably it increases our understanding of how computers learn in the absence of any explicit feedbackhowever the gold pos tag assumption weakens any conclusions that can be drawn as partofspeech are also a form of syntactic analysis only shallowerfurthermore from a practical standpoint it is rarely the case that we are completely devoid of resources for most languagesthis point has been made by studies that transfer parsers to new languages by projecting syntax across word alignments extracted from parallel corpora although again most of these studies also assume the existence of pos tagsin this work we present a method for creating dependency parsers for languages for which no labeled training data is availablefirst we train a source side english parser that crucially is delexicalized so that its predictions rely soley on the partofspeech tags of the input sentence in the same vein as zeman and resnik we empirically show that directly transferring delexicalized models already outperforms stateoftheart unsupervised parsers by a significant marginthis result holds in the presence of both gold pos tags as well as automatic tags projected from englishthis emphasizes that even for languages with no syntactic resources or possibly even parallel data simple transfer methods can already be more powerful than grammar induction 
systemsnext we use this delexicalized english parser to seed a perceptron learner for the target languagethe model is trained to update towards parses that are in high agreement with a source side english parse based on constraints drawn from alignments in the parallel datawe use the augmentedloss learning procedure which is closely related to constraint driven learning the resulting parser consistently improves on the directly transferred delexicalized parser reducing relative errors by 8 on average and as much as 18 on some languagesfinally we show that by transferring parsers from multiple source languages we can further reduce errors by 16 over the directly transferred english baselinethis is consistent with previous work on multilingual partofspeech and grammar induction that shows that adding languages leads to improvementswe present a comprehensive set of experiments on eight indoeuropean languages for which a significant amount of parallel data existswe make no language specific enhancements in our experimentswe report results for sentences of all lengths as well as with gold and automatically induced partofspeech tagswe also report results on sentences of length 10 or less with gold partofspeech tags to compare with previous workour results consistently outperform the previous stateoftheart across all languages and training configurationsin this paper we focus on transferring dependency parsers between languagesa dependency parser takes a tokenized input sentence and produces a connected tree where directed arcs represent a syntactic headmodifier relationshipan example of such a tree is given in figure 1dependency tree arcs are often labeled with the role of the syntactic relationship eg is to hearing might be labeled as subjecthowever we focus on unlabeled parsing in order to reduce problems that arise due to different treebank annotation schemesof course even for unlabeled dependencies significant variations in the annotation schemes remainfor example in the danish treebank determiners govern adjectives and nouns in noun phrases while in most other treebanks the noun is the head of the noun phraseunlike previous work we do not apply any transformations to the treebanks which makes our results easier to reproduce but systematically underestimates accuracythe treebank data in our experiments are from the conll sharedtasks on dependency parsing we use english only as a source language throughout the paperadditionally we use the following eight languages as both source and target languages danish dutch german greek italian portuguese spanish and swedish for languages that were included in both the 2006 and 2007 tasks we used the treebank from the latterwe focused on this subset of languages because they are indoeuropean and a significant amount of parallel data exists for each languageby presenting results on eight languages our study is already more comprehensive than most previous work in this areahowever the restriction to indoeuropean languages does make the results less conclusive when one wishes to transfer a parser from english to chinese for exampleto account for this we report additional results in the discussion for nonindoeuropean languagesfor all data sets we used the predefined training and testing splitsour approach relies on a consistent set of partofspeech tags across languages and treebanksfor this we used the universal tagset from petrov et al which includes noun verb adj adv pron det adp num conj prt punc and x similar tagsets are used by other studies on grammar 
induction and projection for all our experiments we replaced the language specific partofspeech tags in the treebanks with these universal tagslike all treebank projection studies we require a corpus of parallel text for each pair of languages we studyfor this we used the europarl corpus version 5 the corpus was preprocessed in standard ways and word aligned by running six iterations of ibm model 1 followed by six iterations of the hmm model in both directionswe then intersect word alignments to generate onetoone alignmentsall of our parsing models are based on the transitionbased dependency parsing paradigm specifically all models use an arceager transition strategy and are trained using the averaged perceptron algorithm as in zhang and clark with a beam size of 8the features used by all models are the partofspeech tags of the first four words on the buffer and of the top two words on the stack the word identities of the first two words on the buffer and of the top word on the stack the word identity of the syntactic head of the top word on the stack all feature conjunctions are includedfor treebanks with nonprojective trees we use the pseudoprojective parsing technique to transform the treebank into projective structures we focus on using this parsing system for two reasonsfirst the parser is near stateoftheart on english parsing benchmarks and second and more importantly the parser is extremely fast to train and run making it easy to run a large number of experimentspreliminary experiments using a different dependency parser mstparser resulted in similar empirical observationsall systems are evaluated using unlabeled attachment score which is the percentage of words in a corpus that modify the correct head furthermore we evaluate with both goldstandard partofspeech tags as well as predicted partofspeech tags from the projected partofspeech tagger of das and petrov 2 this tagger relies only on labeled training data for english and achieves accuracies around 85 on the languages that we considerwe evaluate in the former setting to compare to previous studies that make this assumptionwe evaluate in the latter setting to measure performance in a more realistic scenario when no target language resources are availableto simplify discussion we first focus on the most common instantiation of parser transfer in the literature transferring from english to other languagesin the next section we expand our system to allow for the inclusion of multiple source languageswe start with the observation that discriminatively trained dependency parsers rely heavily on partofspeech tagging featuresfor example when training and testing a parser on our english data a parser with all features obtains an uas of 8933 whereas a delexicalized parser a parser that only has nonlexical features obtains an uas of 825the key observation is that partofspeech tags contain a significant amount of information for unlabeled dependency parsingthis observation combined with our universal partofspeech tagset leads to the idea of direct transfer ie directly parsing the target language with the source language parser without relying on parallel corporathis idea has been previously explored by zeman and resnik and recently by søgaard because we use a mapping of the treebank specific partofspeech tags to a common tagset the performance of a such a system is easy to measure simply parse the target language data set with a delexicalized parser trained on the source language datawe conducted two experimentsin the first we assumed that 
the test set for each target language had gold partofspeech tags and in the second we used predicted partofspeech tags from the projection tagger of das and petrov which also uses english as the source languageuas for all sentence lengths without punctuation are given in table 1we report results for both the english direct transfer parser as well as a baseline unsupervised grammar induction system the dependency model with valence of klein and manning as obtained by the implementation of ganchev et al we trained on sentences of length 10 or less and evaluated on all sentences from the test set4 for dmv we reversed the direction of all dependencies if this led to higher performancefrom this table we can see that direct transfer is a very strong baseline and is over 20 absolute better than the dmv model for both gold and predicted pos tagstable 4 which we will discuss in more detail later further shows that the direct transfer parser also significantly outperforms stateoftheart unsupervised grammar induction models but in a more limited setting of sentences of length less than 10direct transfer works for a couple of reasonsfirst partofspeech tags contain a significant amount of information for parsing unlabeled dependenciessecond this information can be transferred to some degree across languages and treebank standardsthis is because at least for indoeuropean languages there is some regularity in how syntax is expressed eg primarily svo prepositional etceven though there are some differences with respect to relative location of certain word classes strong headmodifier pos tag preferences can still help resolve these especially when no other viable alternatives are availableconsider for example an artificial sentence with a tag sequence verb noun adj det puncthe english parser still predicts that the noun and punc modify the verb and the adj and det modify the noun even though in the english data such noun phrases are unlikely5 unlike most language transfer systems for parsers the direct transfer approach does not rely on projecting syntax across aligned parallel corpora in this section we describe a simple mechanism for projecting from the direct transfer system using large amounts of parallel data in a similar vein to hwa et al ganchev et al smith and eisner inter aliathe algorithm is based on the work of hall et al for training extrinsic parser objective functions and borrows heavily from ideas in learning with weak supervision including work on learning with constraints and posterior regularization in our case the weak signals come from aligned source and target sentences and the agreement in their corresponding parses which is similar to posterior regularization or the bilingual view of smith and smith and burkett et al the algorithm is given in figure 2it starts by labeling a set of target language sentences with a parser which in our case is the direct transfer parser from the previous section next it uses these parsed target sentences to seed a new parser by training a parameter vector using the predicted parses as a gold standard via standard perceptron updates for j rounds this generates a parser that emulates the direct transfer parser but dp dependency parser ie dp x y that are considered good by some external metricthe algorithm then updates towards that outputin this case goodness is determined through the prespecified sentence alignment and how well the target language parse aligns with the english parseas a result the model will ideally converge to a state where it predicts 
target parses that align as closely as possible with the corresponding english parseshowever since we seed the learner with the direct transfer parser we bias the parameters to select parses that both align well and also have high scores under the direct transfer modelthis helps to not only constrain the search space at the start of learning but also helps to bias dependencies between words that are not part of the alignmentso far we have not defined the align function that is used to score potential parseslet a t t be an alignment where s is a word in the source sentence xs and t is similarly a word in the target sentence xt the notation t e a indicates two words are the ith aligned pair in awe define the align function to encode the direct correspondence assumption from hwa et al has now been lexicalized and is working in the space of target language sentencesnext the algorithm iterates over the sentences in the parallel corpusit parses the english sentence with an english parser it then uses the current target language parameter vector to create a kbest parse list for the target sentence from this list it selects the parse whose dependencies align most closely with the english parse via the prespecified alignment it then uses this selected parse as a proxy to the gold standard parse to update the parameters the intuition is simplethe parser starts with nonrandom accuracies by emulating the direct transfer model and slowly tries to induce better parameters by selecting parses from its kbest list the notation e y indicates that a dependency from head i to modifier j is in tree ythe align function rewards aligned headmodifier pairs and penalizes unaligned pairs when a possible alignment existsfor all other cases it is agnostic ie when one or both of the modifier or head are not alignedfigure 3 shows an example of aligned englishgreek sentences the english parse and a potential greek parsein this case the align function returns a value of 2this is because there are three aligned dependencies tookbook bookthe and fromjohnthese add 3 to the scorethere is one incorrectly aligned dependency the preposition mistakenly modifies the noun on the greek sidethis subtracts 1finally there are two dependencies that do not align the subject on the english side and a determiner to a proper noun on the greek sidethese do not effect the resultthe learning algorithm in figure 2 is an instance of augmentedloss training which is closely related to the constraint driven learning algorithms of chang et al in that work external constraints on output structures are used to help guide the learner to good parameter regionsin our model we use constraints drawn from parallel data exactly in the same mannersince posterior regularization is closely related to constraint driven learning this makes our algorithm also similar to the parser projection approach of ganchev et al there are a couple of differencesfirst we bias our model towards the direct transfer model which is already quite powerfulsecond our alignment constraints are used to select parses from a kbest list whereas in posterior regularization they are used as soft constraints on full model expectations during trainingthe latter is beneficial as the use of kbest lists does not limit the class of parsers to those whose parameters and search space decompose neatly with the dca loss functionan empirical comparison to ganchev et al is given in section 5results are given in table 1 under the column enprojfor all experiments we train the seedstage perceptron for 5 
iterations and we use one hundred times as much parallel data as seed stage nonparallel data the seedstage nonparallel data is the training portion of each treebank stripped of all dependency annotationsafter training the projected parser we average the parameters of the model the parsers evaluated using predicted partofspeech tags use the predicted tags at both training and testing time and are thus free of any target language specific resourceswhen compared with the direct transfer model we can see that there is an improvement for every single language reducing relative error by 8 on average and up to 18 for dutch one could wonder whether the true power of the projection model comes from the relexicalization step lines 36 of the algorithmhowever if just this step is run then the average uas only increases from 570 to 574 showing that most of the improvement comes from the projection stagenote that the results in table 1 indicate that parsers using predicted partofspeech tags are only slightly worse than the parsers using gold tags showing that these methods are robust to tagging errorsthe previous section focused on transferring an english parser to a new target languagehowever there are over 20 treebanks available for a variety of language groups including indoeuropean altaic semitic and sinotibetanmany of these are even in standardized formats past studies have shown that for both partofspeech tagging and grammar induction learning with multiple comparable languages leads to improvements in this section we examine whether this is also true for parser transfertable 2 shows the matrix of sourcetarget language uas for all nine languages we consider we can see that there is a wide range from 333 to 747there is also a wide range of values depending on the source training data andor target testing data eg portuguese as a source tends to parse target languages much better than danish and is also more amenable as a target testing languagesome of these variations are expected eg the romance languages tend to transfer well to one anotherhowever some are unexpected eg greek being the best source language for dutch as well as german being one of the worstthis is almost certainly due to different annotation schemes across treebanksoverall table 2 does indicate that there are possible gains in accuracy through the inclusion of additional languagesin order to take advantage of treebanks in multiple languages our multisource system simply concatenates the training data from all nontarget languagesin other words the multisource direct transfer parser for danish will be trained by first concatenating the training corpora of the remaining eight languages training a delexicalized parser on this data and then directly using this parser to analyze the danish test datafor the multisource projected parser the procedure is identical to that in section 32 except that we use the multisource direct transfer model to seed the algorithm instead of the englishonly direct transfer modelfor these experiments we still only use englishtarget parallel data because that is the format of the readily available data in the europarl corpustable 3 presents four sets of resultsthe first is the direct transfer results for the oracle singlebest source language per target languagethe second is the mean uas over all source languages per target languagethe third is the multisource direct transfer systemthe fourth and final result set is the multisource projected systemthe resulting parsers are typically much more accurate than the 
english direct transfer system on average the multisource direct transfer system reduces errors by 10 relative over the englishonly direct transfer systemthese improvements are not consistentfor greek and dutch we see significant losses relative to the englishonly systeman inspection of table 2 shows that for these two languages english is a particularly good source training languagefor the multisource projected system the results are mixedsome languages see basically no change relative the multisource direct transfer model while some languages see modest to significant increasesbut again there is an overall trend to better modelsin particular starting with an englishonly direct transfer parser with 570 uas on average by adding parallel corpora and multiple source languages we finish with parser having 638 uas on average which is a relative reduction in error of roughly 16 and more than doubles the performance of a dmv model interestingly the multisource systems provide on average accuracies near that of the singlebest source language and significantly better than the average source uasthus even this simple method of multisource transfer already provides strong performance gainswe expect that more principled techniques will lead to further improvementsfor example recent work by søgaard explores data set subsampling methodsunlike our work søgaard found that simply concatenating all the data led to degradation in performancecohen et al explores the idea learning language specific mixture coefficients for models trained independently on the target language treebankshowever their results show that this method often did not significantly outperform uniform mixingcomparing unsupervised and parser projection systems is difficult as many publications use nonoverlapping sets of languages or different evaluation criteriawe compare to the following three systems that do not augment the treebanks and report results for some of the languages that we considered naseem et al in which manually defined universal syntactic rules are used to constrain a probabilistic bayesian modelin addition to their original results we also report results using the same partofspeech tagset as the systems described in this paper this is useful for two reasonsfirst it makes the comparison more directsecond we can generate usr results for all eight languages and not just for the languages that they reporttable 4 gives results comparing the models presented in this work to those three systemsfor this comparison we use sentences of length 10 or less after punctuation has been removed in order to be consistent with reported resultsthe overall trends carry over from the full treebank setting to this reduced sentence length setup the projected models outperform the direct transfer models and multisource transfer gives higher accuracy than transferring only from englishmost previous work has assumed gold partofspeech tags but as the code for usr is publicly available we were able to train it using the same projected partofspeech tags used in our modelsthese results are also given in table 4 under usragain we can see that the multisource systems significantly outperform the unsupervised modelsit is not surprising that a parser transferred from annotated resources does significantly better than unsupervised systems since it has much more information from which to learnthe pr system of ganchev et al is similar to ours as it also projects syntax across parallel corporafor spanish we can see that the multisource direct transfer parser 
is better and this is also true for the multisource projected parser ganchev et al also report results for bulgarianwe trained a multisource direct transfer parser for bulgarian which obtained a score of 728 versus 678 for the pr systemif we only use english as a source language as in ganchev et al the english direct transfer model achieves 661 on bulgarian and 693 on spanish versus 678 and 706 for prin this setting the english projected model gets 720 on spanishthus under identical conditions the direct transfer model obtains accuracies comparable to pr6 another projection based system is that of smith and eisner who report results for german and spanish on sentences of length 15 and less inclusive of punctuationsmith and eisner use custom splits of the data and modify a subset of the dependenciesthe multisource projected parser obtains 719 for german and 678 for spanish on this setup7 if we cherrypick the source language the results can improve eg for spanish we can obtain 717 and 708 by directly transferring parsers form italian or portuguese respectivelyone fundamental point the above experiments illustrate is that even for languages for which no resources exist simple methods for transferring parsers work remarkably wellin particular if one can transfer partofspeech tags then a large part of transferring unlabeled dependencies has been solvedthis observation should lead to a new baseline in unsupervised and projected grammar induction the uas of a delexicalized english parserof course our experiments focus strictly on indoeuropean languagespreliminary experiments for arabic chinese and japanese suggest similar direct transfer methods are applicablefor example on the conll test sets a dmv model obtains uas of 287418346 for arzhja respectively whereas an english direct transfer parser obtains 321538322 and a multisource direct transfer parser obtains 399417433in this setting only indoeuropean languages are used as source datathus even across language groups direct transfer is a reasonable baselinehowever this is not necessary as treebanks are available for a number of language groups eg indoeuropean altaic semitic and sinotibetanthe second fundamental observation is that when available multiple sources should be usedeven through naive multisource methods it is possible to build a system that has comparable accuracy to the singlebest source for all languagesthis advantage does not come simply from having more datain fact if we randomly sampled from the multisource data until the training set size was equivalent to the size of the english data then the results still hold this suggests that even better transfer models can be produced by separately weighting each of the sources depending on the target language either weighting by hand if we know the language group of the target language or automatically if we do notas previously mentioned the latter has been explored in both søgaard and cohen et al we presented a simple yet effective approach for projecting parsers from languages with labeled training data to languages without any labeled training datacentral to our approach is the idea of delexicalizing the models which combined with a standardized partofspeech tagset allows us to directly transfer models between languageswe then use a constraint driven learning algorithm to adapt the transferred parsers to the respective target language obtaining an additional 16 error reduction on average in a multisource settingour final parsers achieve stateoftheart accuracies on eight indoeuropean 
languages, significantly outperforming previous unsupervised and projected systems. Acknowledgements: We would like to thank Kuzman Ganchev, Valentin Spitkovsky and Dipanjan Das for numerous discussions on this topic and comments on earlier drafts of this paper. We would also like to thank Shay Cohen, Dipanjan Das, Noah Smith and Anders Søgaard for sharing early drafts of their recent related work.
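To make the alignment-based scoring described above concrete, here is a minimal sketch of an align-style function: it rewards a target dependency whose aligned source words also form a dependency, penalizes one whose aligned source words do not, and ignores dependencies involving unaligned words. This is one reading of the description in the text; the data structures, the index choices, and the toy English–Greek example are illustrative assumptions, not taken from the paper.

```python
def align_score(source_tree, target_tree, alignment):
    """Score how well a candidate target parse agrees with the source (English)
    parse through a word alignment.

    source_tree, target_tree: sets of (head_index, modifier_index) dependencies.
    alignment: dict mapping target word indices to source word indices
               (unaligned target words are simply absent).
    """
    score = 0
    for head, mod in target_tree:
        if head in alignment and mod in alignment:
            if (alignment[head], alignment[mod]) in source_tree:
                score += 1      # aligned dependency also present in the source tree
            else:
                score -= 1      # both words aligned, but the source words disagree
        # dependencies with an unaligned head or modifier do not affect the score
    return score


# Toy example loosely modelled on the English-Greek pair discussed in the text
# (indices and trees are illustrative, not the actual figure).
# English: I(0) took(1) the(2) book(3) from(4) John(5)
english_tree = {(1, 0), (1, 3), (3, 2), (1, 4), (4, 5)}
# Greek (toy): pire(0) to(1) vivlio(2) apo(3) ton(4) Gianni(5),
# with the preposition wrongly attached to the noun and 'ton' unaligned.
greek_tree = {(0, 2), (2, 1), (2, 3), (3, 5), (5, 4)}
greek_to_english = {0: 1, 1: 2, 2: 3, 3: 4, 5: 5}

print(align_score(english_tree, greek_tree, greek_to_english))  # prints 2
```

On this toy input the function reproduces the score of 2 discussed in the text: three agreeing dependencies, one disagreement, and the dependencies touching unaligned words left out of the count.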
D11-1006
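A minimal sketch of the two-stage projection procedure described in the record above: seed a lexicalised target-language parser on the direct transfer parser's output, then iterate over the parallel corpus, pick the k-best candidate parse that agrees most with the English parse, and update towards it. All helper functions (english_parser, kbest_parse, perceptron_update, align_score) and the weight representation are assumed interfaces for illustration, not the paper's implementation.

```python
from collections import defaultdict

def train_projected_parser(seed_parses, parallel_data, english_parser,
                           kbest_parse, perceptron_update, align_score,
                           seed_rounds=5, k=16):
    """Two-stage projection, as described in the text.

    seed_parses   : [(target_sentence, tree)] produced by the direct transfer parser
    parallel_data : [(english_sentence, target_sentence, alignment)]
    """
    weights = defaultdict(float)

    # Stage 1: seed the target parser, treating direct-transfer parses as gold.
    for _ in range(seed_rounds):
        for sentence, proxy_tree in seed_parses:
            weights = perceptron_update(weights, sentence, proxy_tree)

    # Stage 2: for each aligned sentence pair, select the candidate parse that
    # best agrees with the English parse and use it as a proxy gold standard.
    for en_sentence, tgt_sentence, alignment in parallel_data:
        en_tree = english_parser(en_sentence)
        candidates = kbest_parse(weights, tgt_sentence, k)
        proxy = max(candidates,
                    key=lambda tree: align_score(en_tree, tree, alignment))
        weights = perceptron_update(weights, tgt_sentence, proxy)

    return weights
```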
Multi-source transfer of delexicalized dependency parsers. We present a simple method for transferring dependency parsers from source languages with labeled training data to target languages without labeled training data. We first demonstrate that delexicalized parsers can be directly transferred between languages, producing significantly higher accuracies than unsupervised parsers. We then use a constraint-driven learning algorithm, where constraints are drawn from parallel corpora, to project the final parser. Unlike previous work on projecting syntactic resources, we show that simple methods for introducing multiple source languages can significantly improve the overall quality of the resulting parsers. The projected parsers from our system result in state-of-the-art performance when compared to previously studied unsupervised and projected parsing systems across eight different languages. We show that part-of-speech tags contain significant amounts of information for unlabeled dependency parsing. We demonstrate an alternative to grammar induction by projecting reference parse trees from languages that have annotations to ones that are resource-poor. Treebanks in other languages can still serve as a kind of proxy for learning which features generally transfer useful information. We demonstrate that projecting from a single oracle-chosen language can lead to good parsing performance.
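A small sketch of the multi-source direct transfer setup described in this record: delexicalize every non-target treebank down to its shared coarse part-of-speech tags, concatenate the results, and train a single parser on that pooled data. The token representation and the train_parser call are assumptions for illustration only.

```python
def delexicalize(treebank):
    """Replace word forms with their coarse part-of-speech tags so the parser's
    features no longer depend on language-specific vocabulary.
    Each sentence is a list of (form, coarse_pos, head_index) tokens."""
    return [[(pos, pos, head) for (form, pos, head) in sentence]
            for sentence in treebank]


def multi_source_training_data(treebanks, target_language):
    """Concatenate the delexicalized treebanks of every non-target language.
    `treebanks` is a dict: language name -> list of sentences."""
    data = []
    for language, treebank in treebanks.items():
        if language != target_language:
            data.extend(delexicalize(treebank))
    return data

# e.g. train a parser on multi_source_training_data(treebanks, "danish") and
# apply it directly to Danish test data (the training routine is assumed).
```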
semisupervised recursive autoencoders for predicting sentiment distributions we introduce a novel machine learning framework based on recursive autoencoders for sentencelevel prediction of sentiment label distributions our method learns vector space representations for multiword phrases in sentiment prediction tasks these representations outperform other stateoftheart approaches on commonly used datasets such as movie reviews without using any predefined sentiment lexica or polarity shifting rules we also evaluate the models ability to predict sentiment distributions on a new dataset based on confessions from the experience project the dataset consists of personal user stories annotated with multiple labels which when aggregated form a multinomial distribution that captures emotional reactions our algorithm can more accurately predict distributions over such labels compared to several competitive baselines the ability to identify sentiments about personal experiences products movies etc is crucial to understand user generated content in social networks blogs or product reviewsdetecting sentiment in these data is a challenging task which has recently spawned a lot of interest current baseline methods often use bagofwords representations which cannot properly capture more complex linguistic phenomena in sentiment analysis for instance while the two phrases white blood cells destroying an infection and an infection destroying white blood cells have the same bagofwords representation the former is a positive reaction while the later is very negativemore advanced methods such as are first mapped into a semantic vector space then they are recursively merged by the same autoencoder network into a fixed length sentence representationthe vectors at each node are used as features to predict a distribution over sentiment labels2010 that can capture such phenomena use many manually constructed resources this limits the applicability of these methods to a broader range of tasks and languageslastly almost all previous work is based on single positivenegative categories or scales such as star ratingsexamples are movie reviews opinions customer reviews or multiple aspects of restaurants such a onedimensional scale does not accurately reflect the complexity of human emotions and sentimentsin this work we seek to address three issues instead of using a bagofwords representation our model exploits hierarchical structure and uses compositional semantics to understand sentiment our system can be trained both on unlabeled domain data and on supervised sentiment data and does not require any languagespecific sentiment lexica sorry hugs you rock teehee i understand wow just wow i walked into a parked car parsers etc rather than limiting sentiment to a positivenegative scale we predict a multidimensional distribution over several complex interconnected sentimentswe introduce an approach based on semisupervised recursive autoencoders which use as input continuous word vectorsfig1 shows an illustration of the model which learns vector representations of phrases and full sentences as well as their hierarchical structure from unsupervised textwe extend our model to also learn a distribution over sentiment labels at each node of the hierarchywe evaluate our approach on several standard datasets where we achieve stateofthe art performancefurthermore we show results on the recently introduced experience project dataset that captures a broader spectrum of human sentiments and emotionsthe dataset consists of very personal 
confessions anonymously made by people on the experience project website wwwexperienceprojectcomconfessions are labeled with a set of five reactions by other usersreaction labels are you rock tehee i understand sorry hugs and wow just wow for evaluation on this dataset we predict both the label with the most votes as well as the full distribution over the sentiment categorieson both tasks our model outperforms competitive baselinesa set of over 31000 confessions as well as the code of our model are available at wwwsocherorgafter describing the model in detail we evaluate it qualitatively by analyzing the learned ngram vector representations and compare quantitatively against other methods on standard datasets and the ep datasetour model aims to find vector representations for variablesized phrases in either unsupervised or semisupervised training regimesthese representations can then be used for subsequent taskswe first describe neural word representations and then proceed to review a related recursive model based on autoencoders introduce our recursive autoencoder and describe how it can be modified to jointly learn phrase representations phrase structure and sentiment distributionswe represent words as continuous vectors of parameterswe explore two settingsin the first setting we simply initialize each word vector x e right now by sampling it from a zero mean gaussian distribution x nthese word vectors are then stacked into a word embedding matrix l e right nowv where v is the size of the vocabularythis initialization works well in supervised settings where a network can subsequently modify these vectors to capture certain label distributionsin the second setting we pretrain the word vectors with an unsupervised neural language model these models jointly learn an embedding of words into a vector space and use these vectors to predict how likely a word occurs given its contextafter learning via gradient ascent the word vectors capture syntactic and semantic information from their cooccurrence statisticsin both cases we can use the resulting matrix of word vectors l for subsequent tasks as followsassume we are given a sentence as an ordered list of m wordseach word has an associated vocabulary index k into the embedding matrix which we use to retrieve the words vector representationmathematically this lookup operation can be seen as a simple projection layer where we use a binary vector b which is zero in all positions except at the kth index in the remainder of this paper we represent a sentence as an ordered list of these vectors this word representation is better suited to autoencoders than the binary number representations used in previous related autoencoder models such as the recursive autoassociative memory model or recurrent neural networks since sigmoid units are inherently continuouspollack circumvented this problem by having vocabularies with only a handful of words and by manually defining a threshold to binarize the resulting vectorsthe goal of autoencoders is to learn a representation of their inputsin this section we describe how to obtain a reduced dimensional vector representation for sentencesin the past autoencoders have only been used in setting where the tree structure was given aprioriwe review this setting before continuing with our model which does not require a given tree structurefig2 shows an instance of a recursive autoencoder applied to a given treeassume we are given a list of word vectors x as described in the previous section as well as a binary tree 
structure for this input in the form of branching triplets of parents with children each child can be either an input word vector xi or a nonterminal node in the treefor the example in fig2 we have the following triplets in order to be able to apply the same neural network to each pair of children the hidden representations yi have to have the same dimensionality as the xisgiven this tree structure we can now compute the parent representationsthe first parent vector y1 is computed from the children where we multiplied a matrix of parameters w e rnx2n by the concatenation of the two childrenafter adding a bias term we applied an elementwise activation function such as tanh to the resulting vectorone way of assessing how well this ndimensional vector represents its children is to try to reconstruct the children in a reconstruction layer during training the goal is to minimize the reconstruction errors of this input pairfor each pair we compute the euclidean distance between the original input and its reconstruction this model of a standard autoencoder is boxed in fig2now that we have defined how an autoencoder can be used to compute an ndimensional vector representation of two ndimensional children we can describe how such a network can be used for the rest of the treeessentially the same steps repeatnow that y1 is given we can use eq2 to compute y2 by setting the children to be again after computing the intermediate parent vector y2 we can assess how well this vector capture the content of the children by computing the reconstruction error as in eq4the process repeat until the full tree is constructed and we have a reconstruction error at each nonterminal nodethis model is similar to the raam model which also requires a fixed tree structurenow assume there is no tree structure given for the input vectors in xthe goal of our structureprediction rae is to minimize the reconstruction error of all vector pairs of children in a treewe define a as the set of all possible trees that can be built from an input sentence xfurther let t be a function that returns the triplets of a tree indexed by s of all the nonterminal nodes in a treeusing the reconstruction error of eq4 we compute we now describe a greedy approximation that constructs such a treegreedy unsupervised raefor a sentence with m words we apply the autoencoder recursivelyit takes the first pair of neighboring vectors defines them as potential children of a phrase concatenates them and gives them as input to the autoencoderfor each word pair we save the potential parent node p and the resulting reconstruction errorafter computing the score for the first pair the network is shifted by one position and takes as input vectors and again computes a potential parent node and a scorethis process repeats until it hits the last pair of words in the sentence next it selects the pair which had the lowest reconstruction error and its parent representation p will represent this phrase and replace both children in the sentence word listfor instance consider the sequence and assume the lowest etec was obtained by the pair after the first pass the new sequence then consists of the process repeats and treats the new vector p like any other input vectorfor instance subsequent states could be either or pboth states would then finish with a deterministic choice of collapsing the remaining two states into one parent to obtain or respectivelythe tree is then recovered by unfolding the collapsing decisionsthe resulting tree structure captures as much of the 
singleword information as possible but does not necessarily follow standard syntactic constraintswe also experimented with a method that finds better solutions to eq5 based on ckylike beam search algorithms but the performance is similar and the greedy version is much fasterweighted reconstructionone problem with simply using the reconstruction error of both children equally as describe in eq4 is that each child could represent a different number of previously collapsed words and is hence of bigger importance for the overall meaning reconstruction of the sentencefor instance in the case of one would like to give more importance to reconstructing p than x1we capture this desideratum by adjusting the reconstruction errorlet n1 n2 be the number of words underneath a current potential child we redefine the reconstruction error to be length normalizationone of the goals of raes is to induce semantic vector representations that allow us to compare ngrams of different lengthsthe rae tries to lower reconstruction error of not only the bigrams but also of nodes higher in the treeunfortunately since the rae computes the hidden representations it then tries to reconstruct it can just lower reconstruction error by making the hidden layer very small in magnitudeto prevent such undesirable behavior we modify the hidden layer such that the resulting parent representation always has length one after computing p as in eq2 we simply set p p pso far the rae was completely unsupervised and induced general representations that capture the semantics of multiword phrasesin this section we extend raes to a semisupervised setting in order to predict a sentence or phraselevel target distribution t1 one of the main advantages of the rae is that each node of the tree built by the rae has associated with it a distributed vector representation which could also be seen as features describing that phrasewe can leverage this representation by adding on top of each parent node a simple softmax layer to predict class distributions assuming there are k labels d e rk is a kdimensional multinomial distribution and p k1 dk 1fig3 shows such a semisupervised rae unitlet tk be the kth element of the multinomial target label distribution t for one entrythe softmax layers outputs are interpreted as conditional probabilities dk p hence the crossentropy error is 1for the binary label classification case the distribution is of the form 1 0 for class 1 and 0 1 for class 2using this crossentropy error for the label and the reconstruction error from eq6 the final semisupervised rae objective over pairs in a corpus becomes where we have an error for each entry in the training set that is the sum over the error at the nodes of the tree that is constructed by the greedy rae let θ b w b wlabel l be the set of our model parameters then the gradient becomes to compute this gradient we first greedily construct all trees and then derivatives for these trees are computed efficiently via backpropagation through structure because the algorithm is greedy and the derivatives of the supervised crossentropy error also modify the matrix w this objective is not necessarily continuous and a step in the gradient descent direction may not necessarily decrease the objectivehowever we found that lbfgs run over the complete training data to minimize the objective works well in practice and that convergence is smooth with the algorithm typically finding a good solution quicklythe error at each nonterminal node is the weighted sum of reconstruction and 
crossentropy errors the hyperparameter α weighs reconstruction and crossentropy errorwhen minimizing the crossentropy error of this softmax layer the error will backpropagate and influence both the rae parameters and the word representationsinitially words such as good and bad have very similar representationsthis is also the case for brown clusters and other methods that use only cooccurrence statistics in a small window around each wordwhen learning with positivenegative sentiment the word embeddings get modified and capture less syntactic and more sentiment informationin order to predict the sentiment distribution of a sentence with this model we use the learned vector representation of the top tree node and train a simple logistic regression classifierwe first describe the new experience project dataset results of standard classification tasks on this dataset and how to predict its sentiment label distributionswe then show results on other commonly used datasets and conclude with an analysis of the important parameters of the modelin all experiments involving our model we represent words using 100dimensional word vectorswe explore the two settings mentioned in sec21we compare performance on standard datasets when using randomly initialized word vectors or word vectors trained by the model of collobert and weston and provided by turian et al 2 these vectors were trained on an unlabeled corpus of the english wikipedianote that alternatives such as brown clusters are not suitable since they do not capture sentiment information and cannot be modified via backpropagationthe confessions section of the experience project website3 let us people anonymously write short personal stories or confessionsonce a story is on the site each user can give a single vote to one of five label categories the ep dataset has 31676 confession entries a total number of 74859 votes for the 5 labels above the average number of votes per entry is 24 for the five categories the numbers of votes are 14 81613 32510 073 30 844 5 801since an entry with less than 4 votes is not very well identified we train and test only on entries with at least 4 total votesthere are 6129 total such entriesthe distribution over total votes in the 5 classes is similar 022 02 011 037 01the average length of entries is 129 wordssome entries contain multiple sentencesin these cases we average the predicted label distributions from the sentencestable 1 shows statistics of this and other commonly used sentiment datasets table 2 shows example entries as well as gold and predicted label distributions as described in the next sectionscompared to other datasets the ep dataset contains a wider range of human emotions that goes far beyond positivenegative product or movie reviewseach item is labeled with a multinomial distribution over interconnected response categoriesthis is in contrast to most other datasets where several distinct aspects are rated independently but on the same scalethe topics range from generic happy statements daily clumsiness reports love loneliness to relationship abuse and suicidal notesas is evident from the total number of label votes the most common user reaction is one of empathy and an ability to relate to the authors experiencehowever some stories describe horrible scenarios that are not common and hence receive more offers of condolencein the following sections we show some examples of stories with predicted and true distributions but refrain from listing the most horrible experiencesfor all experiments on the ep 
dataset we split the data into train development and test data the first task for our evaluation on the ep dataset is to simply predict the single class that receives the most votesin order to compare our novel joint phrase representation and classifier learning framework to traditional methods we use the following baselines random since there are five classes this gives 20 accuracymost frequent selecting the class which most frequently has the most votes baseline 1 binary bow this baseline uses logistic regression on binary bagofword representations that are 1 if a word is present and 0 otherwisebaseline 2 features this model is similar to traditional approaches to sentiment classification in that it uses many handengineered resourceswe first used a spellchecker and wordnet to map words and their misspellings to synsets to reduce the total number of wordswe then replaced sentiment words with a sentiment category identifier using the sentiment lexica of the harvard inquirer and liwc lastly we used tfidf weighting on the bagofword representations and trained an svmkl predictedgold v entry 03 16 16 16 33 16 6 i reguarly shoplifti got caught once and went to jail but i have found that this was not a deterrenti do not buy groceries i do not buy school supplies for my kids i do not buy gifts for my kids we do not pay for movies and i do not buy most incidentals for the house 03 38 04 06 35 14 165 i am a very succesfull buissnes mani make good money but i have been addicted to crack for 13 yearsi moved 1 hour away from my dealers 10 years ago to stop using now i do not use daily but once a week usally friday nights i used to use 1 or 2 hundred a day now i use 4 or 5 hundred on a fridaymy problem is i am a funcational addict 05 14 28 14 28 14 7 hi there i am a guy that loves a girl the same old bloody storyi met her a while ago while studying she is so perfect so mature and yet so lonely i get to know her and she get ahold of me by opening her life to me and so did i with her she has been the first person male or female that has ever made that bond with me 07 27 18 00 45 09 11 be kissing you right now i should be wrapped in your arms in the dark but instead i have ruined everything i have piled bricks to make a wall where there never should have been one i feel an ache that i should not feel because i have never had you close enough we have never touched but i still feel as though a part of me is missing 05 23 dear love i just want to say that i am looking for youtonight i felt the urge to write and i am becoming more and more frustrated that i have not found you yeti am also tired of spending so much heart on an old dream 05 5 i wish i knew somone to talk to here06 24 i loved her but i screwed it upnow she is moved oni will never have her againi do not know if i will ever stop thinking about her06 5 i am 13 years old and i hate my father he is alwas geting drunk and dos not care about how it affects me or my sisters i want to care but the truthis i do not care if he dies 13 6 well i think hairy women are attractive 35 5 as soon as i put clothings on i will go down to dq and get a thin mint blizzardi need itit will make my soul feel a bit better 36 6 i am a 45 year old divoced woman and i have not been on a date or had any significant relationship in 12 yearsyes 12 yrs the sad thing is i am not some dried up old granny who is no longer interested in men i just cannot meet menwhat is wrong with me63 6 when i was in kindergarden i used to lock myself in the closet and eat all the candythen the 
teacher found out it was one of us and made us go two days without freetimeit might be a little late now but sorry guys it was me haha 92 4 my paper is due in less than 24 hours and i am still dancing round my roombaseline 3 word vectors we can ignore the rae tree structure and only train softmax layers directly on the pretrained words in order to influence the word vectorsthis is followed by an svm trained on the average of the word vectorswe also experimented with latent dirichlet allocation but performance was very lowtable 3 shows the results for predicting the class with the most voteseven the approach that is based on sentiment lexica and other resources is outperformed by our model by almost 3 showing that for tasks involving complex broadrange human sentiment the often used sentiment lexica lack in coverage and traditional bagofwords representations are not powerful enoughwe now turn to evaluating our distributionprediction approachin both this and the previous maximum label task we backprop using the gold multinomial distribution as a targetsince we maximize likelihood and because we want to predict a distribution that is closest to the distribution of labels that people would assign to a story we evaluate using kl divergence kl ei gi log where g is the gold distribution and p is the predicted onewe report the average kl divergence where a smaller value indicates better predictive powerto get an idea of the values of kl divergence predicting random distributions gives a an average of 12 in kl divergence predicting simply the average distribution in the training data give 083fig4 shows that our raebased model outperforms the other baselinestable 2 shows ep example entries with predicted and gold distributions as well as numbers of votesin order to compare our approach to other methods we also show results on commonly used sentiment datasets movie reviews4 and opinions5 we give statistical information on these and the ep corpus in table 1we compare to the stateoftheart system of a dependency tree based classification method that uses crfs with hidden variableswe use the same training and testing regimen as well as their baselines majority phrase voting using sentiment and reversal lexica rulebased reversal using a dependency tree bagoffeatures and their full treecrf modelas shown in table 4 our algorithm outperforms their approach on both datasetsfor the movie review data set we do not use any handdesigned lexicaan error analysis on the mpqa dataset showed several cases of single words which never occurred in the training setcorrectly classifying these instances can only be the result of having them in the original sentiment lexiconhence for the experiment on mpqa we added the same sentiment lexicon that used in their system to our training setthis improved accuracy from 860 to 864using the pretrained word vectors boosts performance by less than 1 compared to randomly initialized word vectors this shows that our method can work well even in settings with little training datawe visualize the semantic vectors that the recursive autoencoder learns by listing ngrams that give the highest probability for each polaritytable 5 shows such ngrams for different lengths when the rae is trained on the movie review polarity dataseton a 4core machine training time for the smaller corpora such as the movie reviews takes around 3 hours and for the larger ep corpus around 12 hours until convergencetesting of hundreds of movie reviews takes only a few secondsin this experiment we show how the 
hyperparameter α influences accuracy on the development set of one of the crossvalidation splits of the mr datasetthis parameter essentially tradeoff the supervised and unsupervised parts of the objectivefig5 shows that a larger focus on the supervised objective is important but that a weight of α 02 for the reconstruction error prevents overfitting and achieves the highest performanceautoencoders are neural networks that learn a reduced dimensional representation of fixedsize inputs such as image patches or bagofword representations of text documentsthey can be used to efficiently learn feature encodings which are useful for classificationrecently mirowski et al learn dynamic autoencoders for documents in a bagofwords format which like ours combine supervised and reconstruction objectivesthe idea of applying an autoencoder in a recursive setting was introduced by pollack pollacks recursive autoassociative memories are similar to ours in that they are a connectionst feedforward modelhowever raams learn vector representations only for fixed recursive data structures whereas our rae builds this recursive data structuremore recently introduced a linear modification to raams that is able to better generalize to novel combinations of previously seen constituentsone of the major shortcomings of previous applications of recursive autoencoders to natural language sentences was their binary word representation as discussed in sec21recently introduced a maxmargin framework based on recursive neural networks for labeled structure predictiontheir models are applicable to natural language and computer vision tasks such as parsing or object detectionthe current work is related in that it uses a recursive deep learning modelhowever rnns require labeled tree structures and use a supervised score at each nodeinstead raes learn hierarchical structures that are trying to capture as much of the the original word vectors as possiblethe learned structures are not necessarily syntactically plausible but can capture more of the semantic content of the word vectorsother recent deep learning methods for sentiment analysis include pang et al were one of the first to experiment with sentiment classificationthey show that simple bagofwords approaches based on naive bayes maxent models or svms are often insufficient for predicting sentiment of documents even though they work well for general topicbased document classificationeven adding specific negation words bigrams or partofspeech information to these models did not add significant improvementsother documentlevel sentiment work includes for further references see instead of document level sentiment classification analyze the contextual polarity of phrases and incorporate many well designed features including dependency treesthey also show improvements by first distinguishing between neutral and polar sentencesour model naturally incorporates the recursive interaction between context and polarity words in sentences in a unified framework while simultaneously learning the necessary features to make accurate predictionsother approaches for sentencelevel sentiment detection include most previous work is centered around a given sentiment lexicon or building one via heuristics manual annotation or machine learning techniques in contrast we do not require an initial or constructed sentiment lexicon of positive and negative wordsin fact when training our approach on documents or sentences it jointly learns such lexica for both single words and ngrams propose isotonic 
conditional random fields and differentiate between local sentencelevel and global documentlevel sentimentthe work of focuses on manually constructing several lexica and rules for both polar words and related contentword negators such as prevent cancer where prevent reverses the negative polarity of cancerlike our approach they capture compositional semanticshowever our model does so without manually constructing any rules or lexicarecently showed how to use a seed lexicon and a graph propagation framework to learn a larger sentiment lexicon that also includes polar multiword phrases such as once in a life timewhile our method can also learn multiword phrases it does not require a seed set or a large web graph introduced an approach based on crfs with hidden variables with very good performancewe compare to their stateoftheart systemwe outperform them on the standard corpora that we tested on without requiring external systems such as pos taggers dependency parsers and sentiment lexicaour approach jointly learns the necessary features and tree structurein multiaspect rating one finds several distinct aspects such as food or service in a restaurant and then rates them on a fixed linear scale such as 15 stars where all aspects could obtain just 1 star or all aspects could obtain 5 stars independentlyin contrast in our method a single aspect is predicted not in terms of a fixed scale but in terms of a multinomial distribution over several interconnected sometimes mutually exclusive emotionsa single story cannot simultaneously obtain a strong reaction in different emotional responses we presented a novel algorithm that can accurately predict sentencelevel sentiment distributionswithout using any handengineered resources such as sentiment lexica parsers or sentiment shifting rules our model achieves stateoftheart performance on commonly used sentiment datasetsfurthermore we introduce a new dataset that contains distributions over a broad range of human emotionsour evaluation shows that our model can more accurately predict these distributions than other modelswe gratefully acknowledge the support of the defense advanced research projects agency machine reading program under air force research laboratory prime contract nofa875009c0181any opinions findings and conclusion or recommendations expressed in this material are those of the author and do not necessarily reflect the view of darpa afrl or the us governmentthis work was also supported in part by the darpa deep learning program under contract number fa865010c7020we thank chris potts for help with the ep data set raymond hsu bozhi see and alan wu for letting us use their system as a baseline and jiquan ngiam quoc le gabor angeli and andrew maas for their feedback
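As a companion to the model description above, a minimal sketch of a single recursive-autoencoder merge: encode two children, length-normalize the parent, attempt to reconstruct the children, and weight each child's reconstruction error by the number of words it covers. The decoder parameterisation, the choice of activation in the reconstruction layer, and the exact form of the word-count weighting are assumptions consistent with, but not copied from, the paper.

```python
import numpy as np

def rae_node(c1, c2, W, b, W_dec, b_dec, n1=1, n2=1):
    """One recursive-autoencoder merge.

    c1, c2 : child vectors of length n;  n1, n2 : words covered by each child.
    W, b   : encoder parameters (n x 2n matrix, length-n bias).
    W_dec, b_dec : decoder parameters (2n x n matrix, length-2n bias).
    Returns the length-normalised parent vector and the weighted
    reconstruction error.
    """
    children = np.concatenate([c1, c2])
    p = np.tanh(W @ children + b)          # parent representation
    p = p / np.linalg.norm(p)              # length normalisation
    recon = np.tanh(W_dec @ p + b_dec)     # try to reconstruct [c1; c2]
    r1, r2 = recon[:len(c1)], recon[len(c1):]
    # weight each child's error by the fraction of words it covers
    err = (n1 / (n1 + n2)) * np.sum((c1 - r1) ** 2) \
        + (n2 / (n1 + n2)) * np.sum((c2 - r2) ** 2)
    return p, err
```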
D11-1014
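A sketch of the greedy tree construction described in the text above: repeatedly merge the adjacent pair of nodes with the lowest reconstruction error until a single root remains. The merge callback is assumed to wrap a trained RAE node (for example the rae_node sketch above with its parameters bound in); the bookkeeping details are illustrative.

```python
def greedy_rae_tree(word_vectors, merge):
    """Greedy structure prediction for an unsupervised RAE.

    word_vectors : list of vectors, one per word.
    merge(c1, c2, n1, n2) -> (parent_vector, reconstruction_error)
    Returns the root vector and the list of merge positions taken.
    """
    nodes = [(v, 1) for v in word_vectors]      # (vector, number of words covered)
    decisions = []
    while len(nodes) > 1:
        scored = []
        for i in range(len(nodes) - 1):
            (c1, n1), (c2, n2) = nodes[i], nodes[i + 1]
            parent, err = merge(c1, c2, n1, n2)
            scored.append((err, i, parent, n1 + n2))
        err, i, parent, n = min(scored, key=lambda t: t[0])
        nodes[i:i + 2] = [(parent, n)]          # collapse the cheapest pair
        decisions.append(i)
    return nodes[0][0], decisions
```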
Semi-supervised recursive autoencoders for predicting sentiment distributions. We introduce a novel machine learning framework based on recursive autoencoders for sentence-level prediction of sentiment label distributions. Our method learns vector space representations for multi-word phrases. In sentiment prediction tasks these representations outperform other state-of-the-art approaches on commonly used datasets, such as movie reviews, without using any predefined sentiment lexica or polarity shifting rules. We also evaluate the model's ability to predict sentiment distributions on a new dataset based on confessions from the Experience Project. The dataset consists of personal user stories annotated with multiple labels which, when aggregated, form a multinomial distribution that captures emotional reactions. Our algorithm can more accurately predict distributions over such labels compared to several competitive baselines. We introduce a semi-supervised approach that uses recursive autoencoders to learn the hierarchical structure and sentiment distribution of a sentence.
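Finally, a sketch of the semi-supervised pieces discussed in this record: a softmax layer on each node's vector predicts a label distribution, its cross-entropy against the target distribution is combined with the reconstruction error through the hyperparameter alpha, and predicted sentence-level distributions are evaluated with average KL divergence against the gold label distributions. The exact placement of alpha and the smoothing constants are assumptions of this sketch.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def node_error(parent_vec, recon_error, target_dist, W_label, alpha=0.2):
    """Per-node semi-supervised error: alpha-weighted reconstruction error plus
    (1 - alpha)-weighted cross-entropy of the softmax label prediction."""
    d = softmax(W_label @ parent_vec)                    # predicted label distribution
    cross_entropy = -np.sum(target_dist * np.log(d + 1e-12))
    return alpha * recon_error + (1.0 - alpha) * cross_entropy

def mean_kl(gold_dists, predicted_dists, eps=1e-12):
    """Average KL(gold || predicted) over a test set; smaller is better."""
    total = 0.0
    for g, p in zip(gold_dists, predicted_dists):
        g, p = np.asarray(g), np.asarray(p)
        total += np.sum(g * np.log((g + eps) / (p + eps)))
    return total / len(gold_dists)
```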
domain adaptation via pseudo indomain data selection we explore efficient domain adaptation for the task of statistical machine translation based on extracting sentences from a large generaldomain parallel corpus that are most relevant to the target domain these sentences may be selected with simple crossentropy based methods of which we present three as these sentences are not themselves identical the indomain data we call them these subcorpora 1 the size of the original can then used to train small domainadapted statistical machine translation systems which outperform systems trained on the entire corpus performance is further improved when we use these domainadapted models in combination with a true indomain model the results show that more training data is not always better and that best results are attained via proper domainrelevant data selection as well as combining inand generaldomain systems during decoding statistical machine translation system performance is dependent on the quantity and quality of available training datathe conventional wisdom is that more data is better the larger the training corpus the more accurate the model can bethe trouble is that except for the few allpurpose smt systems there is never enough training data that is directly relevant to the translation task at handeven if there is no formal genre for the text to be translated any coherent translation task will have its own argot vocabulary or stylistic preferences such that the corpus characteristics will necessarily deviate from any allencompassing model of languagefor this reason one would prefer to use more indomain data for trainingthis would empirically provide more accurate lexical probabilities and thus better target the task at handhowever parallel indomain data is usually hard to find1 and so performance is assumed to be limited by the quantity of domainspecific training data used to build the modeladditional parallel data can be readily acquired but at the cost of specificity either the data is entirely unrelated to the task at hand or the data is from a broad enough pool of topics and styles such as the web that any use this corpus may provide is due to its size and not its relevancethe task of domain adaptation is to translate a text in a particular domain for which only a small amount of training data is available using an mt system trained on a larger set of data that is not restricted to the target domainwe call this larger set of data a generaldomain corpus in lieu of the standard yet slightly misleading outofdomain corpus to allow a large uncurated corpus to include some text that may be relevant to the target domainmany existing domain adaptation methods fall into two broad categoriesadaptation can be done at the corpus level by selecting joining or weighting the datasets upon which the models are trainedit can be also achieved at the model level by combining multiple translation or language models together often in a weighted mannerwe explore both categories in this workfirst we present three methods for ranking the sentences in a generaldomain corpus with respect to an indomain corpusa cutoff can then be applied to produce a very smallyet useful subcorpus which in turn can be used to train a domainadapted mt systemthe first two data selection methods are applications of languagemodeling techniques to mt the third method is novel and explicitly takes into account the bilingual nature of the mt training corpuswe show that it is possible to use our data selection methods to subselect less 
than 1 of a large general training corpus and still increase translation performance by nearly 2 bleu pointswe then explore how best to use these selected subcorporawe test their combination with the indomain set followed by examining the subcorpora to see whether they are actually indomain outofdomain or something in betweenbased on this we compare translation model combination methodsfinally we show that these tiny translation models for model combination can improve system performance even further over the current standard way of producing a domainadapted mt systemthe resulting process is lightweight simple and effectivean underlying assumption in domain adaptation is that a generaldomain corpus if sufficiently broad likely includes some sentences that could fall within the target domain and thus should be used for trainingequally the generaldomain corpus likely includes sentences that are so unlike the domain of the task that using them to train the model is probably more harmful than beneficialone mechanism for domain adaptation is thus to select only a portion of the generaldomain corpus and use only that subset to train a complete systemthe simplest instance of this problem can be found in the realm of language modeling using perplexitybased selection methodsthe sentences in the generaldomain corpus are scored by their perplexity score according to an indomain language model and then sorted with only the lowest ones being retainedthis has been done for language modeling including by gao et al and more recently by moore and lewis the ranking of the sentences in a generaldomain corpus according to indomain perplexity has also been applied to machine translation by both yasuda et al and foster et al we test this approach with the difference that we simply use the source side perplexity rather than computing the geometric mean of the perplexities over both sides of the corpuswe also reduce the size of the training corpus far more aggressively than yasuda et als 50foster et al do not mention what percentage of the corpus they select for their irbaseline but they concatenate the data to their indomain corpus and report a decrease in performancewe both keep the models separate and reduce their sizea more general method is that of who assign a weight to each sentence in the large corpus and modify the empirical phrase counts accordinglyfoster et al further perform this on extracted phrase pairs not just sentenceswhile this soft decision is more flexible than the binary decision that comes from including or discarding a sentence from the subcorpus it does not reduce the size of the model and comes at the cost of computational complexity as well as the possibility of overfittingadditionally the most effective features of were found to be metainformation about the source documents which may not be availableanother perplexitybased approach is that taken by moore and lewis where they use the crossentropy difference as a ranking function rather than just crossentropywe apply this criterion for the first time to the task of selecting training data for machine translation systemswe furthermore extend this idea for mtspecific purposesin addition to improving the performance of a single general model with respect to a target domain there is significant interest in using two translation models one trained on a larger generaldomain corpus and the other on a smaller indomain corpus to translate indomain textafter all if one has access to an indomain corpus with which to select data from a generaldomain 
corpus then one might as well use the indomain data toothe expectation is that the larger generaldomain model should dominate in regions where the smaller indomain model lacks coverage due to sparse ngram countsin practice most practical systems also perform targetside language model adaptation we eschew this in order to isolate the effects of translation model adaptation alonedirectly concatenating the phrase tables into one larger one is not strongly motivated identical phrase pairs within the resulting table can lead to unpredictable behavior during decodingnakov handled identical phrase pairs by prioritizing the source tables however in our experience identical entries in phrase tables are not very common when comparing across domainsfoster and kuhn interpolated the in and generaldomain phrase tables together assigning either linear or loglinear weights to the entries in the tables before combining overlapping entries this is now standard practicelastly koehn and schroeder reported improvements from using multiple decoding paths to pass both tables to the moses smt decoder instead of directly combining the phrase tables to perform domain adaptationin this work we directly compare the approaches of and on the systems generated from the methods mentioned in section 21we conducted our experiments on the international workshop on spoken language translation chinesetoenglish dialog task 2 consisting of transcriptions of conversational speech in a travel settingtwo corpora are needed for the adaptation taskour indomain data consisted of the iwslt corpus of approximately 30000 sentences in chinese and englishour generaldomain corpus was 12 million parallel sentences comprising a variety of publicly available datasets web data and private translation textsboth the in and generaldomain corpora were identically segmented and tokenized but otherwise unprocessedwe evaluated our work on the 2008 iwslt spontaneous speech challenge task3 test set consisting of 504 chinese sentences with 7 english reference translations eachthis is the most recent iwslt test set for which the reference translations are availablein order to highlight the data selection work we used an outofthebox moses framework using giza and mert to train and tune the machine translation systemsthe only exception was the phrase table for the large outofdomain system trained on 12m sentence pairs which we trained on a cluster using a worddependent hmmbased alignment we used the moses decoder to produce all the system outputs and scored them with the nist mteval31a 4 tool used in the iwslt evalutationour work depends on the use of language models to rank sentences in the training corpus in addition to their normal use during machine translation tuning and decodingwe used the sri language modeling toolkit was used for lm training in all cases corpus selection mt tuning and decodingwe constructed 4gram language models with interpolated modified kneserney discounting and set the goodturing threshold to 1 for trigramsthe indomain baseline consisted of a translation system trained using moses as described above on the iwslt corpusthe resulting model had a phrase table with 515k entriesthe generaldomain baseline was substantially larger having been trained on 12 million sentence pairs and had a phrase table containing 15 billion entriesthe bleu scores of the baseline singlecorpus systems are in table 1we present three techniques for ranking and selecting subsets of a generaldomain corpus with an eye towards improving overall translation 
performance. As mentioned in Section 2.1, one established method is to rank the sentences in the general-domain corpus by their perplexity score according to a language model trained on the small in-domain corpus. This reduces the perplexity of the general-domain corpus, with the expectation that only sentences similar to the in-domain corpus will remain. We apply the method to machine translation, even though perplexity reduction has been shown to not correlate with translation performance. For this work we follow the procedure of Moore and Lewis, which applies the cosmetic change of using the cross-entropy rather than perplexity. The perplexity of some string s with empirical n-gram distribution p given a language model q is 2^{H(p,q)}, where H(p, q) is the cross-entropy between p and q. We simplify this notation to just H_I(s), meaning the cross-entropy of string s according to a language model LM_I which has distribution q. Selecting the sentences with the lowest perplexity is therefore equivalent to choosing the sentences with the lowest cross-entropy according to the in-domain language model. For this experiment, we used a language model trained on the Chinese side of the IWSLT corpus. Moore and Lewis also start with a language model LM_I over the in-domain corpus, but then further construct a language model LM_O of similar size over the general-domain corpus. They then rank the general-domain corpus sentences using H_I(s) - H_O(s), again taking the lowest-scoring sentences. This criterion biases towards sentences that are both like the in-domain corpus and unlike the average of the general-domain corpus. For this experiment we reused the in-domain LM from the previous method, and trained a second LM on a random subset of 35k sentences from the Chinese side of the general corpus, using the same vocabulary as the in-domain LM. In addition to using these two monolingual criteria for MT data selection, we propose a new method that takes into account the bilingual nature of the problem. To this end, we sum the cross-entropy difference over each side of the corpus, both source and target: [H_{I-src}(s) - H_{O-src}(s)] + [H_{I-tgt}(s) - H_{O-tgt}(s)]. Again, lower scores are presumed to be better. This approach reuses the source-side language models from Section 4.2, but requires similarly-trained ones over the English side. Again, the vocabulary of the language model trained on a subset of the general-domain corpus was restricted to only cover those tokens found in the in-domain corpus, following Moore and Lewis. The baseline results show that a translation system trained on the general-domain corpus outperforms a system trained on the in-domain corpus by over 3 BLEU points. However, this can be improved further. We used the three methods from Section 4 to identify the best-scoring sentences in the general-domain corpus. We consider three methods for extracting domain-targeted parallel data from a general corpus: source-side cross-entropy, source-side cross-entropy difference following Moore and Lewis, and bilingual cross-entropy difference, which is novel. Regardless of method, the overall procedure is the same. Using the scoring method, we rank the individual sentences of the general-domain corpus and select only the top N. We used the top N = 35k, 70k, and 150k sentence pairs out of the 12 million in the general corpus. The net effect is that of domain adaptation via threshold filtering. New MT systems were then trained solely on these small subcorpora and compared against the baseline model trained on the entire 12M-sentence general-domain corpus. Table 2 contains BLEU scores of the systems trained on subsets of the general corpus. All three methods presented for selecting a subset of the
generaldomain corpus could be used to train a stateoftheart machine translation systemthe simplest method using only the sourceside crossentropy was able to outperform the generaldomain model when selecting 150k out of 12 million sentencesthe other monolingual method sourceside crossentropy difference was able to perform nearly as well as the generaldomain model with only 35k sentencesthe bilingual moorelewis method proposed in this paper works best consistently boosting performance by 18 bleu while using less than 1 of the available training datathe results in table 2 show that all three methods can extract subsets of the generaldomain corpus that are useful for the purposes of statistical machine translationit is tempting to describe these as methods for finding indomain data hidden in a generaldomain corpusalas this does not seem to be the casewe trained a baseline language model on the indomain data and used it to compute the perplexity of the same heldout dev set used to tune the translation modelswe extracted the top n sentences using each ranking method varying n from 10k to 200k and then trained language models on these subcorporathese were then used to also compute the perplexity of the same heldout dev set shown below in figure 1topranked generaldomain sentences the perplexity of the dev set according to lms trained on the topranked sentences varied from 77 to 120 depending on the size of the subset and the method usedthe crossentropy method was consistently worse than the others with a best perplexity of 994 on 20k sentences and bilingual moorelewis was consistently the best with a lowest perplexity of 768and yet none of these scores are anywhere near the perplexity of 3696 according to the lm trained only on indomain datafrom this it can be deduced that the selection methods are not finding data that is strictly indomainrather they are extracting pseudo indomain data which is relevant but with a differing distribution than the original indomain corpusas further evidence consider the results of concatenating the indomain corpus with the best extracted subcorpora shown in table 3the change in both the dev and test scores appears to reflect dissimilarity in the underlying datawere the two datasets more alike one would expect the models to reinforce each other rather than cancel outbecause the pseudo indomain data should be kept separate from the indomain data one must train multiple translation models in order to advantageously use the generaldomain corpuswe now examine how best to combine these modelsa common approach to managing multiple translation models is to interpolate them as in and we tested the linear interpolation of the in and generaldomain translation models as follows given one model which assigns the probability p1 to the translation of source string s into target string t and a second model which assigns the probability p2 to the same event then the interpolated translation probability is here a is a tunable weight between 0 and 1 which we tested in increments of 01linear interpolation of phrase tables was shown to improve performance over the individual models but this still may not be the most effective use of the translation modelswe next tested the approach in passing the two phrase tables directly to the decoder and tuning a system using both phrase tables in paralleleach phrase table receives a separate set of weights during tuning thus this combined translation model has more parameters than a normal singletable systemunlike we explicitly did not attempt to 
resolve any overlap between the phrase tables as there is no need to do so with the multiple decoding pathsany phrase pairs appearing in both models will be treated separately by the decoderhowever the exact overlap between the phrase tables was tiny minimizing this effecttable 4 shows baseline results for the indomain translation system and the generaldomain system evaluated on the indomain datathe table also shows that linearly interpolating the translation models improved the overall bleu score as expectedhowever using multiple decoding paths and no explicit model merging at all produced even better results by 2 bleu points over the best individual model and 13 bleu over the best interpolated model which used a 09we conclude that it can be more effective to not attempt translation model adaptation directly and instead let the decoder do the workwe presented in section 5 several methods to improve the performance of a single generaldomain translation system by restricting its training corpus on an informationtheoretic basis to a very small number of sentenceshowever section 63 shows that using two translation models over all the available data outperforms any single individual translation model so far albeit only slightlyit is well and good to use the indomain data to select pseudo indomain data from the generaldomain corpus but given that this requires access to an indomain corpus one might as well use itas such we used the indomain translation model alongside translation models trained on the subcorpora selected using the moorelewis and bilingual moorelewis methods in section 4the results are in table 5a translation system trained on a pseudo indomain subset of the general corpus selected with the bilingual moorelewis method can be further improved by combining with an indomain modelfurthermore this system combination works better than the conventional multimodel approach by up to 07 bleu on both the dev and test setsthus a domainadapted system comprising two phrase tables trained on a total of 180k sentences outperformed the standard multimodel system which was trained on 12 million sentencesthis tiny combined system was also 3 points better than the generaldomain system by itself and 6 points better than the indomain system alonesentence pairs from a generaldomain corpus that seem similar to an indomain corpus may not actually represent the same distribution of language as measured by language model perplexitynonetheless we have shown that relatively tiny amounts of this pseudo indomain data can prove more useful than the entire generaldomain corpus for the purposes of domaintargeted translation tasksthis paper has also explored three simple yet effective methods for extracting these pseudo indomain sentences from a generaldomain corpusa translation model trained on any of these subcorpora can be comparable or substantially better than a translation system trained on the entire corpusin particular the new bilingual moorelewis method which is specifically tailored to the machine translation scenario is shown to be more efficient and stable for mt domain adaptationtranslation models trained on data selected in this way consistently outperformed the generaldomain baseline while using as few as 35k out of 12 million sentencesthis fast and simple technique for discarding over 99 of the generaldomain training corpus resulted in an increase of 18 bleu pointswe have also shown in passing that the linear interpolation of translation models may work less well for translation model adaptation 
than the multiple paths decoding technique of koehn and schroeder these approaches of data selection and model combination can be stacked resulting in a compact two phrasetable translation system trained on 1% of the available data that again outperforms a stateoftheart translation system trained on all the data besides improving translation performance this work also provides a way to mine very large corpora in a computationallylimited environment such as on an ordinary computer or perhaps a mobile device the maximum size of a useful generaldomain corpus is now limited only by the availability of data rather than by how large a translation model can be fit into memory at once
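As a concrete illustration of the selection procedure described above, the sketch below scores each general-domain sentence pair by the bilingual cross-entropy difference [H_I-src(s) − H_O-src(s)] + [H_I-tgt(t) − H_O-tgt(t)] and keeps the lowest-scoring pairs. This is a minimal sketch, not the authors' implementation: the language-model scoring functions are left abstract (they could come from SRILM models as in the text, or any other toolkit), and the function name and data layout are illustrative assumptions.

```python
import heapq
from typing import Callable, Iterable, List, Tuple

# A "cross-entropy function" maps a sentence (string) to its cross-entropy
# under some pre-trained language model; how these models are built
# (SRILM, KenLM, ...) is deliberately left abstract here.
XentFn = Callable[[str], float]

def bilingual_moore_lewis_select(
    bitext: Iterable[Tuple[str, str]],    # (source, target) pairs from the general-domain corpus
    h_in_src: XentFn, h_gen_src: XentFn,  # in-domain / general-domain LMs, source side
    h_in_tgt: XentFn, h_gen_tgt: XentFn,  # in-domain / general-domain LMs, target side
    top_n: int = 35_000,
) -> List[Tuple[str, str]]:
    """Rank general-domain sentence pairs by bilingual cross-entropy
    difference and keep the top_n lowest-scoring (most in-domain-like) pairs."""
    def score(pair):
        src, tgt = pair
        return ((h_in_src(src) - h_gen_src(src)) +
                (h_in_tgt(tgt) - h_gen_tgt(tgt)))
    # lower score = more like the in-domain corpus, less like the general one
    return heapq.nsmallest(top_n, bitext, key=score)
```

The monolingual Moore-Lewis criterion is the special case that drops the two target-side terms, and plain source-side cross-entropy selection drops the general-domain terms as well.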
D11-1033
domain adaptation via pseudo indomain data selection we explore efficient domain adaptation for the task of statistical machine translation based on extracting sentences from a large generaldomain parallel corpus that are most relevant to the target domain these sentences may be selected with simple crossentropy based methods of which we present three as these sentences are not themselves identical to the indomain data we call them pseudo indomain subcorpora these subcorpora 1% the size of the original can then be used to train small domainadapted statistical machine translation systems which outperform systems trained on the entire corpus performance is further improved when we use these domainadapted models in combination with a true indomain model the results show that more training data is not always better and that best results are attained via proper domainrelevant data selection as well as combining in and generaldomain systems during decoding we improve the perplexity based approach and propose bilingual crossentropy difference as a ranking function with in and generaldomain language models we propose a bilingual crossentropy difference to select data from a parallel corpus for domain adaptation which captures the contextual information slightly and outperforms monolingual crossentropy difference
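The linear interpolation of in- and general-domain translation models discussed earlier can be sketched as follows. This is a simplification made for illustration: a real phrase table carries several scores per entry rather than a single probability, and which table receives the larger weight (the text reports the best interpolated system used α = 0.9) is a tuning decision.

```python
def interpolate_phrase_tables(table_a, table_b, alpha=0.9):
    """Linearly interpolate two phrase tables.

    Each table maps (source_phrase, target_phrase) -> a translation probability.
    alpha is the weight given to table_a; entries missing from one table are
    treated as having probability 0 under that table.
    """
    combined = {}
    for pair in set(table_a) | set(table_b):
        p1 = table_a.get(pair, 0.0)
        p2 = table_b.get(pair, 0.0)
        combined[pair] = alpha * p1 + (1.0 - alpha) * p2
    return combined
```

The alternative that worked better in the experiments avoids merging entirely and instead passes both tables to the decoder as multiple decoding paths, each with its own tuned weights.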
divide and conquer crowdsourcing the creation of crosslingual textual entailment corpora we address the creation of crosslingual textual entailment corpora by means of crowdsourcing our goal is to define a cheap and replicable data collection methodology that minimizes the manual work done by expert annotators without resorting to preprocessing tools or already annotated monolingual datasets in line with recent works emphasizing the need of largescale annotation efforts for textual entailment our work aims to the scarcity of data available to train evaluate systems and the recourse to crowdsourcing as an effective way to reduce the costs of data collection without sacrificing quality we show that a complex data creation task for which even experts usually feature low agreement scores can be effectively decomposed into simple subtasks assigned to nonexpert annotators the resulting dataset obtained from a pipeline of different jobs routed to amazon mechanical turk contains more than 1600 aligned pairs for each combination of textshypotheses in english italian and german crosslingual textual entailment has been recently proposed by as an extension of textual entailment the task consists of deciding given a text and an hypothesis in different languages if the meaning of h can be inferred from the meaning of t as in other nlp applications both for monolingual and crosslingual tethe availability of large quantities of annotated data is an enabling factor for systems development and evaluationuntil now however the scarcity of such data on the one hand and the costs of creating new datasets of reasonable size on the other have represented a bottleneck for a steady advancement of the state of the artin the last few years monolingual te corpora for english and other european languages have been created and distributed in the framework of several evaluation campaigns including the rte challenge1 the answer validation exercise at clef2 and the textual entailment task at evalita3despite the differences in the design of the tasks all the released datasets were collected through similar procedures always involving expensive manual work done by expert annotatorsmoreover in the data creation process large amounts of handcrafted th pairs often have to be discarded in order to retain only those featuring full agreement in terms of the assigned entailment judgements among multiple annotatorsthe amount of discarded pairs is usually high contributing to increase the costs of creating textual entailment datasets4the issues related to the shortage of datasets and the high costs for their creation are more evident in the clte scenario where i the only dataset currently available is an englishspanish corpus obtained by translating the rte3 corpus and ii the application of the standard methods adopted to build rte pairs requires proficiency in multiple languages thus significantly increasing the costs of the data creation processto address these issues in this paper we devise a costeffective methodology to create crosslingual textual entailment corporain particular we focus on the following problems is it possible to collect th pairs minimizing the intervention of expert annotatorsto address this question we explore the feasibility of crowdsourcing the corpus creation processas a contribution beyond the few works on teclte data acquisition we define an effective methodology that i does not involve experts in the most complex stages of the process ii does not require preprocessing tools and iii does not rely on the 
availability of already annotated rte corpora to nonexperts difficult to accomplish and not suitable for the application of the qualitycheck mechanisms provided by current crowdsourcing servicesour divide and conquer solution represents the first attempt to address a complex task involving content generation and labelling through the definition of a cheap and reliable pipeline of simple tasks which are easy to define accomplish and control guagesmoreover since the core monolingual tasks of the process are carried out by manipulating english texts we are able to address the very large community of english speaking workers with a considerable reduction of costs and execution timefinally as a byproduct of our method the acquired pairs are fully aligned for all language combinations thus enabling meaningful comparisons between scenarios of different complexity we believe that in the same spirit of recent works promoting largescale annotation efforts around entailment corpora the proposed approach and the resulting dataset5 will contribute to meeting the strong need for resources to develop and evaluate novel solutions for textual entailmentcrowdsourcing services such as amazon mechanical turk6 and crowdflower7 have been recently used with success for a variety of nlp applications the idea is that the acquisition and annotation of large amounts of data needed to train and evaluate nlp tools can be carried out in a costeffective manner by defining simple human intelligence tasks routed to a crowd of nonexpert workers hired through online marketplacesas regards textual entailment the first work exploring the use of crowdsourcing services for data annotation is described in which shows high agreement between nonexpert annotations of the rte1 dataset and existing gold standard labels assigned by expert labellersfocusing on the actual generation of monolingual entailment pairs experiments the use of mturk to collect facts and counter facts related to texts extracted from an existing rte corpus annotated with named entitiestaking a step beyond the task of annotating existing datasets and showing the feasibility of involving nonexperts also in the generation of te pairs this approach is more relevant to our objectiveshowever at least two major differences with our work have to be remarkedfirst they still use available rte data to obtain a monolingual te corpus whereas we pursue the more ambitious goal of generating from scratch aligned clte corpora for different language combinationsto this aim we do not resort to already annotated data nor languagespecific preprocessing toolssecond their approach involves qualitative analysis of the collected data only a posteriori after manual removal of invalid and trivial generated hypothesesin contrast our approach integrates quality control mechanisms at all stages of the data collectionannotation process thus minimizing the recourse to experts to check the quality of the collected materialrelated research in the clte direction is reported in which describes the creation of an englishspanish corpus obtained from the rte3 dataset by translating the english hypotheses into spanishtranslations have been crowdsourced adopting a methodology based on translationvalidation cycles defined as separate hitsalthough simplifying the clte corpus creation problem which is recast as the task of translating already available annotated data this solution is relevant to our work for the idea of combining gold standard units and validation hits as a way to control the quality of 
the collected data at runtimethe design of data acquisition hits has to take into account several factors each having a considerable impact on the difficulty of instructing the workers the quality and quantity of the collected data the time and overall costs of the acquisitiona major distinction has to be made between jobs requiring data annotation and those involving content generationin the former case turkers are presented with the task of labelling input data referring to a fixed set of possible values in the latter case turkers are faced with creative tasks consisting in the production of textual material the ease of controlling the quality of the acquired data depends on the nature of the jobfor annotation jobs quality control mechanisms can be easily set up by calculating turkers agreement by applying voting schemes or by adding hidden gold units to the data to be annotated8in contrast the quality of the results of content generation jobs is harder to assess due to the fact that multiple valid results are acceptable in such situations the standard quality control mechanisms are not directly applicable and the detection of errors requires either costly manual verification at the end of the acquisition process or more complex and creative solutions integrating hits for quality checkmost of the approaches to content generation proposed so far rely on post hoc verification to filter out undesired lowquality data the few solutions integrating validation hits address the translation of single sentences a task that is substantially different from ours compared to sentence translation the task of creating clte pairs is both harder to explain without recurring to notions that are difficult to understand to nonexperts and harder to execute without mastering these notionsto tackle these issues the divide and conquer approach described in the next section consists in the decomposition of a difficult content generation job into easier subtasks that are i selfcontained and easy to explain ii easy to execute without any nlp expertise and iii suitable for the integration of a variety of runtime control mechanisms able to ensure a good quality of the collected material8both mturk and crowdflower provide means to check workers reliability and weed out untrusted ones without money wastethese include different types of qualification mechanisms the possibility of giving work only to known trusted turkers and the possibility of adding hidden gold standard units in the data to be annotated tions the execution of the two multilingual stages is not strictly necessary but depends on i the availability of parallel sentences to start the process and ii the actual objectives in terms of language combinations to be covered10as regards the first stage in this work we started from a set of 467 englishitaliangerman aligned sentences extracted from parallel documents downloaded from the cafebabel european magazine11concerning the second multilingual stage we performed only one round of translations from english to italian to extend the 3 combinations obtained without translations with the new language combinations itaita itaeng and itagerour approach builds on a pipeline of hits routed to mturks workforce through the crowdflower interfacethe objective is to collect aligned th pairs for different language combinations reproducing an rtelike annotation stylehowever our annotation is not limited to the standard rte framework where only unidirectional entailment from t to h is consideredas a useful extension we annotate 
any possible entailment relation between the two text fragments including i bidirectional entailment ii unidirectional entailment from t to h and iii unidirectional entailment from h to t the resulting pairs can be easily used to generate not only standard rte datasets9 but also generalpurpose collections featuring multidirectional entailment relationswe collect large amounts of clte pairs carrying out the most difficult part of the process at a monolingual levelstarting from a set of parallel sentences in n languages n entailment corpora are created one monolingual and n1 crosslingual the monolingual corpus is obtained by modifying the sentences only in one language original and modified sentences are then paired and annotated to form an entailment dataset for l1the clte corpora are obtained by combining the modified sentences in l1 with the original sentences in l2 and l3 and projecting to the multilingual pairs the annotations assigned to the monolingual pairsin principle only two stages of the process require crowdsourcing multilingual tasks but do not concern entailment annotationsthe first one at the beginning of the process aims to obtain a set of parallel sentences to start with and can be done in different ways the second one at the end of the process consists of translating the modified l1 sentences into other languages in order to extend the corpus to cover new language combina9with the positive examples drawn from bidirectional and unidirectional entailments from t to h and the negative ones drawn from unidirectional entailments from h to t the main steps of our corpus creation process depicted in figure 1 can be summarized as follows step1 sentence modificationthe original english sentences are modified through generation hits asking turkers to i preserve the meaning of the original sentences using different surface forms or ii slightly change their meaning by adding or removing contentour assumption in line with is that 10starting from parallel sentences in n languages the n corpora obtained without recurring to translations can be augmented by means of translation hits to create the full set of language combinationseach round of translation adds 1 monolingual corpus and n1 clte corpora another way to think about entailment is to consider whether one text t1 adds new information to the content of another text t if so then t is entailed by t1the result of this phase is a set of texts that can be of three types step2 te annotationentailment pairs composed of the original sentences and the modified ones are used as input of annotation hits asking turkers to decide which of the two texts contains more informationas a result each engeng1 pair is annotated as an example of unibidirectional entailment and stored in the monolingual english corpussince the original eng texts are aligned with the ita and ger texts the entailment annotations of engeng1 pairs can be projected to the other language pairs and the itaeng1 and gereng1 pairs are stored in the clte corpusthe possibility of projecting te annotations is based on the assumption that the semantic information is mostly preserved during the translation processthis particularly holds at the denotative level which is crucial to semantic inferenceat other levels there might be slight semantic variations which however are very unlikely to play a crucial role in determining entailment relationsstep3 translationthe modified sentences are translated into italian through generation hits reproducing the approach described in as a result 
three new datasets are produced by automatically projecting annotations the monolingual itaita1 and the crosslingual engita1 and gerita1since the solution adopted for sentence translation does not present novelty factors the remainder of this paper will omit further details on itinstead the following sections will focus on the more challenging tasks of sentence modification and te annotationsentence modification and te annotation have been decomposed into a pipeline of simpler monolingual english subtaskssuch pipeline depicted in figure 2 involves several types of generationannotation hits designed to be easily understandable to nonexpertseach hit consists of i a set of instructions for a specific task ii the data to be manipulated and iii a test to check workers reliabilityto cope with the quality control issues discussed in section 3 such tests are realized using gold standard units either hidden in the data to be annotated or defined as test questions that workers must correctly answer moreover regional qualifications are applied to all hitsas a further quality check all the annotation hits consider turkers agreement as a way to filter out low quality results the six hits defined for each subtask can be described as follows new sentence workers are asked to judge which of two given english sentences is more detailed4bremove information modify an english text to create a more general one by removing part of its contentas a reliability test before generating the new sentence workers are asked to judge which of two given english sentences is less detailedcide which of two english sentences provides more informationthese hits are combined in an iterative process that alternates text generation grammaticality check and entailment annotation stepsas a result for each original eng text we obtain multiple eng1 variants of the three types and in turn a set of annotated monolingual te pairsas described in section 41 the resulting monolingual english te corpus is used to create the following monocrosslingual te corporathis section provides a quantitative and qualitative analysis of the results of our corpus creation methodology focusing on the collected engeng1 monolingual datasetit has to be remarked that as an effect of the adopted methodology all the observations and the conclusions drawn hold for the collected clte corpora as welltable 1 provides some details about each step of the pipeline shown in figure 2for each hit the table presents i the number of items given in input ii the number of items produced as output iii the number of items discarded when the agreement threshold was not reached iv the number of entailment pairs added to the corpus v the time required by the mturk workforce to complete the job and vi the cost of the jobin hit1 1414 paraphrases were collected asking three different meaningpreserving modifications of each of the 467 original sentences12from a practical point of view such redundancy aims to ensure a sufficient number of grammatically correct and semantically equivalent modified sentencesfrom a theoretical point of view collecting many variants of a small pool of original sentences aims to create pairs featuring different entailment relations with similar superficial formsthis in principle should allow to obtain a dataset which requires te systems to focus more on deeper semantic phenomena than on the surface realization of the pairsthe collected paraphrases were sent as input to hit2 after this validation hit the number of acceptable paraphrases was reduced to 1326 
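A minimal sketch of the annotation-projection step described above: monolingual ENG-ENG1 entailment judgements are copied onto the aligned Italian and German originals to produce cross-lingual pairs. The data structures and field names are assumptions made for the example; the underlying assumption, as in the text, is that translation preserves the denotational content that decides the entailment relation.

```python
def project_annotations(mono_pairs, aligned):
    """Project monolingual entailment annotations onto cross-lingual pairs.

    mono_pairs: list of (sent_id, eng1_text, label), where label is one of
                'bidirectional', 'forward' (ENG -> ENG1) or 'backward',
                judged on the monolingual (ENG, ENG1) pair.
    aligned:    dict sent_id -> {'eng': ..., 'ita': ..., 'ger': ...}
                holding the original parallel sentences.
    Returns cross-lingual T-H pairs (ITA-ENG1 and GER-ENG1) that simply
    inherit the monolingual label.
    """
    clte = []
    for sent_id, eng1, label in mono_pairs:
        originals = aligned[sent_id]
        clte.append(('ita-eng1', originals['ita'], eng1, label))
        clte.append(('ger-eng1', originals['ger'], eng1, label))
    return clte
```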
the retained paraphrases were paired with their corresponding original sentences and sent to hit3 to be judged for semantic equivalencethe pairs marked as bidirectional entailments were divided in three groups 25 of the pairs were directly stored in the final corpus while the eng1 paraphrases of the remaining 75 were equally distributed to the next modification stepsin both hit4a and hit4b two new modified sentences were asked for each of the 452 paraphrases received as inputthe sentences collected in these generation tasks were respectively 916 and 923the new modified sentences were sent back to hit2 and hit3 as a result 1438 new pairs were created out of these 148 resulted to be bidirectional entailments and were stored in the corpusfinally the 1298 entailment pairs judged as nonbidirectional in the two previously completed hit3 were given as input to hit5 the pairs which passed the agreement threshold were classified according to the judgement received and stored in the corpus as unidirectional entailment pairsthe analysis of table 1 allows to formulate some considerationsfirst the percentage of discarded items confirms the effectiveness of decomposing complex generation tasks into simpler subtasks that integrate validation hits and quality checks based on nonexperts agreementin fact on average around 95 of the generated items were discarded without experts intervention13second the amount of discarded items gives evidence about the relative difficulty of each hitas expected we observe lower rejection rates corresponding to higher interannotator agreement for grammaticality hits than for more complex entailmentrelated tasks looking at costs and execution time it is hard to draw definite conclusions due to several factors that influence the progress of the crowdsourced jobs on the one hand as expected the more creative add info task proved to be more demanding than the remove info even though it was paid more 13moreover it is worthwhile noticing that around 20 of the collected items were automatically rejected due to failures on the gold standard controls created both for generation and annotation tasks14the payment for each hit was set on the basis of a previous feasibility study aimed at determining the best tradeoff between cost and execution timehowever replicating our approach would not necessarily result in the same costs it still took little more time to be completedon the other hand although the unidirectional entailment task was expected to be more difficult and thus rewarded more than the bidirectional entailment one in the end it took notably less time to be completednevertheless the overall figures 15 clearly demonstrate the effectiveness of the approacheven considering the time needed for an expert to manage the pipeline these figures show that our methodology provides a cheaper and faster way to collect entailment data in comparison with the rte average costs reported in section 1as regards the amount of data collected the resulting corpus contains 1620 pairs with the following distribution of entailment relations i 449 bidirectional entailments ii 491 engeng1 unidirectional entailments and iii 680 engeng1 unidirectional entailmentsit must be noted that our methodology does not lead to the creation of pairs where some information is provided in one text and not in the other and viceversa as example 1 shows eng new theories were emerging in the field of psychologyeng1 new theories were rising which announced a kind of veiled racismthese negative examples in both directions 
represent a natural extension of the dataset relevant also for specific applicationoriented scenarios and their creation will be addressed in future workbesides the achievement of our primary objectives the adopted approach led to some interesting byproductsfirst the generated corpora are perfectly suitable to produce entailment datasets similar to those used in the traditional rte evaluation frameworkin particular considering any possible entailment relation between two text fragments our annotation subsumes the one proposed in rte campaignsthis allows for the costeffective generation of rtelike annotations from the acquired cor15although by projecting annotations the eng1ita and eng1ger clte corpora came for free the ita1ita ita1eng and ita1ger combinations created by crowdsourcing translations added 45 usd and approximately 5 days to these figures pora by combining engeng1 and engeng1 pairs to form 940 positive examples keeping the 680 engeng1 as negative examplesmoreover by swapping eng and eng1 in the unidirectional entailment pairs 491 additional negative examples and 680 positive examples can be easily obtainedfinally the output of hits 123 in table 1 represents per se a valuable collection of 1205 paraphrasesthis suggests the great potential of crowdsourcing for paraphrase acquisitionthrough manual verification of more than 50 of the corpus a total number of 53 pairs were found incorrectthe different errors were classified as follows type 1 sentence modification errorsgeneration hits are a minor source of errors being responsible for 10 problematic pairsthese errors are either introduced by generating a false statement or by forming a not fully understandable awkward or nonnatural sentence type 2 te annotation errorsthe notion of containing moreless information used in the unidirectional entailment hit can mostly be applied straightforwardly to the entailment definitionhowever the concept of moreless detailed which generally works for factual statements in some cases is not applicablein fact the mturk workers have regularly interpreted the instructions about the amount of information as concerning the quantity of concepts contained in a sentencethis is not always corresponding to the actual entailment relation between the sentencesas a consequence 43 pairs featuring wrong entailment annotations were encounteredthese errors can be classified as follows a 13 pairs where the addedremoved information changes the meaning of the sentencein these cases the modified sentence was judged moreless specific than the original one leading to unidirectional entailment annotationon the contrary in terms of the standard entailment definition the correct annotation is no entailment these pairs were labelled as unidirectional entailments under the assumption that a proper name is more specific and informative than a pronounhowever adhering to the te definition coreferring expressions are equivalent and their realization does not play any role in the entailment decisionthis implies that the correct entailment annotation is bidirectional c 9 pairs where the sentences are semantically equivalent but contain a piece of information which is explicit in one sentence and implicit in the otherin these cases turkers judged the sentence containing the explicit mention as more specific and thus the pair was annotated as unidirectional entailmentin example 6 the expression the trigger in eng1 implicitly means the click of the trigger making the two sentences equivalent and the entailment bidirectional d 7 pairs 
where the information removed from or added to the sentence is not relevant to the entailment relationin these cases the modified sentence was judged lessmore specific than the original one even though the correct judgement is bidirectional as in e 4 pairs where the addedremoved information concerns universally quantified general statements about which the interpretation of moreless specific given by turkers resulted in the wrong annotationin example 8 the additional information restricts the set to which it refers making eng entailed by eng1 and not vice versa as resulted from turkers annotationin light of this analysis we conclude that the sentence modification methodology proved to be successful as the low number of type 1 errors showsconsidering that the most expensive phase in the creation of a te dataset is the generation of the pairs this is a significant achievementdifferently the entailment assessment phase appears to be more problematic accounting for the majority of errorsas shown by type 2 errors this is due to a partial misalignment between the instructions given in our hits and the formal definition of textual entailmentfor this reason further experimentation will explore different ways to instruct workers in order to reduce the amount of errors producedas a final remark considering that in the creation of a te dataset the manual check of the annotated pairs represents a minor cost even the involvement of experts to filter out wrong annotations would not decrease the costeffectiveness of the proposed methodologythere is an increasing need of annotated data to develop new solutions to the textual entailment problem explore new entailmentrelated tasks and set up experimental frameworks targeting realworld applicationsfollowing the recent trends promoting annotation efforts that go beyond the established rte challenge framework in this paper we addressed the multilingual dimension of the problemour primary goal was the creation of largescale collections of entailment pairs for different language combinationsbesides that we considered cost effectiveness and replicability as additional requirementsto achieve our objectives we developed a divide and conquer methodology based on crowdsourcingour approach presents several key innovations with respect to the related works on te data acquisitionthese include the decomposition of a complex content generation task in a pipeline of simpler subtasks accessible to a large crowd of nonexperts and the integration of quality control mechanisms at each stage of the processthe result of our work is the first largescale dataset containing both monolingual and crosslingual corpora for several combinations of textshypotheses in english italian and germanamong the advantages of our method it is worth mentioning i the full alignment between the created corpora ii the possibility to easily extend the dataset to new languages and iii the feasibility of creating generalpurpose corpora featuring multidirectional entailment relations that subsume the traditional rtelike annotationthis work has been partially supported by the ecfunded project cosyne the authors would like to thank emanuele pianta for the helpful discussions and giovanni moretti for the valuable support in the creation of the clte dataset
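To make the by-product mentioned above concrete, here is a sketch of how the multidirectional annotations can be recast as RTE-style binary examples: bidirectional and T-to-H pairs become positives, pairs where only H-to-T holds become negatives, and swapping T and H in the unidirectional pairs yields further examples. The label names are illustrative, not taken from the released dataset.

```python
def to_rte_format(pairs):
    """Convert multidirectional entailment annotations into RTE-style
    (text, hypothesis, YES/NO) examples.

    pairs: iterable of (t, h, label) with label in
           {'bidirectional', 't_entails_h', 'h_entails_t'}.
    """
    rte = []
    for t, h, label in pairs:
        if label in ('bidirectional', 't_entails_h'):
            rte.append((t, h, 'YES'))
        else:                      # only H -> T holds
            rte.append((t, h, 'NO'))
        # extra examples obtained by swapping the unidirectional pairs
        if label == 't_entails_h':
            rte.append((h, t, 'NO'))
        elif label == 'h_entails_t':
            rte.append((h, t, 'YES'))
    return rte
```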
D11-1062
divide and conquer crowdsourcing the creation of crosslingual textual entailment corpora we address the creation of crosslingual textual entailment corpora by means of crowdsourcing our goal is to define a cheap and replicable data collection methodology that minimizes the manual work done by expert annotators without resorting to preprocessing tools or already annotated monolingual datasets in line with recent works emphasizing the need of largescale annotation efforts for textual entailment our work aims to i tackle the scarcity of data available to train and evaluate systems and ii promote the recourse to crowdsourcing as an effective way to reduce the costs of data collection without sacrificing quality we show that a complex data creation task for which even experts usually feature low agreement scores can be effectively decomposed into simple subtasks assigned to nonexpert annotators the resulting dataset obtained from a pipeline of different jobs routed to amazon mechanical turk contains more than 1600 aligned pairs for each combination of textshypotheses in english italian and german
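One of the runtime quality-control mechanisms used throughout the pipeline is filtering crowdsourced judgements by annotator agreement, after unreliable workers have already been weeded out via hidden gold units. The sketch below shows one plausible way to implement such a filter; the agreement threshold and the minimum number of votes are assumptions, since the text does not report the exact values used.

```python
from collections import Counter

def filter_by_agreement(judgements, min_agreement=0.7, min_votes=3):
    """Keep only items whose crowd label passes an agreement threshold.

    judgements: dict item_id -> list of labels from different workers.
    Returns dict item_id -> majority label for items where the majority label
    reaches the required share of at least min_votes judgements; everything
    else is dropped, mirroring the discarded items reported in Table 1.
    """
    kept = {}
    for item_id, labels in judgements.items():
        if len(labels) < min_votes:
            continue
        label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            kept[item_id] = label
    return kept
```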
tuning as ranking we offer a simple effective and scalable method for statistical machine translation parameter tuning based on the pairwise approach to ranking unlike the popular mert algorithm our pairwise ranking optimization method is not limited to a handful of parameters and can easily handle systems with thousands of features moreover unlike recent approaches built upon the mira algorithm of crammer and singer pro is easy to implement it uses offtheshelf linear binary classifier software and can be built on top of an existing mert framework in a matter of hours we establish pros scalability and effectiveness by comparing it to mert and mira and demonstrate parity on both phrasebased and syntaxbased systems in a variety of language pairs using large scale data scenarios the mert algorithm is currently the most popular way to tune the parameters of a statistical machine translation systemmert is wellunderstood easy to implement and runs quickly but can behave erratically and does not scale beyond a handful of featuresthis lack of scalability is a significant weakness as it inhibits systems from using more than a couple dozen features to discriminate between candidate translations and stymies feature development innovationseveral researchers have attempted to address this weaknessrecently watanabe et al and chiang et al have developed tuning methods using the mira algorithm as a nucleusthe mira technique of chiang et al has been shown to perform well on largescale tasks with hundreds or thousands of features however the technique is complex and architecturally quite different from merttellingly in the entire proceedings of acl 2010 only one paper describing a statistical mt system cited the use of mira for tuning while 15 used mert1 here we propose a simpler approach to tuning that scales similarly to highdimensional feature spaceswe cast tuning as a ranking problem where the explicit goal is to learn to correctly rank candidate translationsspecifically we follow the pairwise approach to ranking in which the ranking problem is reduced to the binary classification task of deciding between candidate translation pairsof primary concern to us is the ease of adoption of our proposed techniquebecause of this we adhere as closely as possible to the established mert architecture and use freely available machine learning softwarethe end result is a technique that scales and performs just as well as mirabased tuning but which can be implemented in a couple of hours by anyone with an existing mert implementationmindful that many wouldbe enhancements to the stateoftheart are false positives that only show improvement in a narrowly defined setting or with limited data we validate our claims on both syntax and phrasebased systems using multiple language pairs and large data setswe describe tuning in abstract and somewhat formal terms in section 2 describe the mert algorithm in the context of those terms and illustrate its scalability issues via a synthetic experiment in section 3 introduce our pairwise ranking optimization method in section 4 present numerous largescale mt experiments to validate our claims in section 5 discuss some related work in section 6 and conclude in section 7in figure 1 we show an example candidate space defined as a tuple where the example candidate space has two source sentences three candidate translations for each source sentence and feature vectors of dimension 2it is an example of a finite candidate space defined as a candidate space for which i is finite and j maps 
each index of i to a finite seta policy of candidate space is a function that maps each member i e i to a member of ja policy corresponds to a choice of one candidate translation for each source sentencefor the example in figure 1 policy p1 11 2 2 31 corresponds to the choice of he does not go for the first source sentence and i do not go for the second source sentenceobviously some policies are better than otherspolicy p2 11 3 2 11 corresponds to the inferior translations she not go and i go not we assume the mt system distinguishes between policies using a scoring function for candidate translations of the form hw w x where w is a weight vector of the same dimension as feature vector xthis scoring function extends to a policy p by summing the cost of each of the policys candidate translations hw eii hwas can be seen in figure 1 using w 2 1 hw 9 and hw 8the goal of tuning is to learn a weight vector w such that hw assigns a high score to good policies and a low score to bad policies2 to do so we need information about which policies are good and which are badthis information is provided by a gold scoring function g that maps each policy to a realvalued scoretypically this gold function is bleu though there are several common alternatives we want to find a weight vector w such that hw behaves similarly to g on a candidate space s we assume a loss function ls which returns the realvalued loss of using scoring function hw when the gold scoring function is g and the candidate space is s thus we may say the goal of tuning is to find the weight vector w that minimizes lossin general the candidate space may have infinitely many source sentences as well as infinitely many candidate translations per source sentencein practice tuning optimizes over a finite subset of source sentences3 and a finite subset of candidate translations as wellthe classic tuning architecture used in the dominant mert approach forms the translation subset and learns weight vector w via algorithm tune space s ha i j f e xi wrt gold function g a feedback loop consisting of two phasesfigure 2 shows the pseudocodeduring candidate generation candidate translations are selected from a base candidate space s and added to a finite candidate space s0 called the candidate poolduring optimization the weight vector w is optimized to minimize loss lsfor its candidate generation phase mert generates the kbest candidate translations for each source sentence according to hw where w is the weight vector from the previous optimization phase for its optimization phase mert defines the loss function as follows in other words it prefers weight vectors w such that the gold function g scores hws best policy as highly as possible typically the optimization phase is implemented using ochs line optimization algorithm mert has proven itself effective at tuning candidate spaces with low dimensionalityhowever it is often claimed that mert does not scale well with dimensionalityto test this claim we devised the following synthetic data experiment we used line optimization in the standard way by generating 20 random starting weight vectors and hillclimbing on each independently until no further progress is made then choosing the final weight vector that minimizes losswe tried various dimensionalities from 10 to 1000we repeated each setting three times generating different random data each timethe results in figure 3 indicate that as the dimensionality of the problem increases mert rapidly loses the ability to learn wnote that this synthetic problem is 
considerably easier than a real mt scenario where the data is noisy and interdependent and the gold scoring function is nonlinearif mert cannot scale in this simple scenario it has little hope of succeeding in a highdimensionality deployment scenariowe would like to modify mert so that it scales well to highdimensionality candidate spacesthe most prominent example of a tuning method that performs well on highdimensionality candidate spaces is the mirabased approach used by watanabe et al and chiang et alunfortunately this approach requires a complex architecture that diverges significantly from the mert approach and consequently has not been widely adoptedour goal is to achieve the same performance with minimal modification to mertwith mert as a starting point we have a choice modify candidate generation optimization or bothalthough alternative candidate generation methods have been proposed we will restrict ourselves to mertstyle candidate generation in order to minimize divergence from the established mert tuning architectureinstead we focus on the optimization phasewhile intuitive the mert optimization module focuses attention on hws best policy and not on its overall prowess at ranking policieswe will create an optimization module that directly addresses hws ability to rank policies in the hope that this more holistic approach will generalize better to unseen dataassume that the gold scoring function g decomposes in the following way where g is a local scoring function that scores the single candidate translation ewe show an example g in figure 1for an arbitrary pair of candidate translations e and e the local gold function g tells us which is the better translationnote that this induces a ranking on the candidate translations for each source sentencewe follow the pairwise approach to ranking in the pairwise approach the learning task is framed as the classification of candidate pairs into two categories correctly ordered and incorrectly orderedspecifically for candidate translation pair e and e we want g g hw hwwe can reexpress this condition thus optimization reduces to a classic binary classification problemwe create a labeled training instance for this problem by computing difference vector x x and labeling it as a positive or negative instance based on whether respectively the first or second vector is superior according to gold function g to ensure balance we consider both possible difference vectors from a pairfor example given the candidate space of figure 1 since g g we would add and to our training setwe can then feed this training data directly to any offtheshelf classification tool that returns a linear classifier in order to obtain a weight vector w that optimizes the above conditionthis weight vector can then be used directly by the mt system in the subsequent candidate generation phasethe exact loss function ls optimized depends on the choice of classifier4 typical approaches to pairwise ranking enumerate all difference vectors as training datafor tuning however this means o vectors where jmax is the cardinality of the largest jsince i and jmax commonly range in the thousands a full enumeration would produce billions of feature vectorsout of tractability considerations we sample from the space of difference vectors using the sampler template in figure 4for each source sentence i the sampler generates f candidate translation pairs hj ji and accepts each pair with probability αi gamong the accepted pairs it keeps the ξ with greatest g differential and adds their difference 
vectors to the training data5 we repeated the scalability study from section 3 now using our pairwise ranking optimization approachthroughout all experiments with pro we choose γ 5000 ξ 50 and the following step function α for each αz 6 we used megam iii 2004 as a binary classifier in our contrasting synthetic experiment and of the ie with all default settings for binary figure 3 shows that pro is able to learn nearly perfectly at all dimensionalities from 10 to 1000as noted previously though this is a rather simple taskto encourage a disconnect between g and h and make the synthetic scenario look more like obtained these parameters by trialanderror experimentation on a single mt system then held them fixed throughout our experimentswe obtained similar results using p 100 and for each a logistic sigmoid function centered at the mean g differential of candidate translation pairs for the ith source sentencethis alternative approach has the advantage of being agnostic about which gold scoring function is used the sampling settings previously described and megam as our classifier we were able to optimize two to three times faster than with but added noise to each feature vector drawn from a zeromean gaussian with a standard deviation of 500the results of the noisy synthetic experiments but still the idea of learning from difference vectors also lies at the heart of the mirabased approaches and the approach of roth et al which similar to our method uses sampling to select vectorshere we isolate these aspects of those approaches to create a simpler tuning technique that closely mirrors the ubiquitous mert architectureamong other simplifications we abstract away the choice of mira as the classification method and we eliminate the need for oracle translationsan important observation is that bleu does not satisfy the decomposability assumption of equation an advantage of mert is that it can directly optimize for nondecomposable scoring functions like bleuin our experiments we use the bleu1 approximation to bleu to determine class labelswe will nevert heless use bleu to evaluate the trained systemswe now turn to real machine translation conditions to validate our thesis we can cleanly replace merts line optimization with pairwise ranking optimization and immediately realize the benefits of highdimension tuningwe now detail the three language pairs two feature scenarios and two mt models used for our experimentsfor each language pair and each mt model we used mert mira and pro to tune with a standard set of baseline features and used the latter two methods to tune with an extended set of features8 at the end of every experiment we used the final feature weights to decode a heldout test set and evaluated it with casesensitive bleuthe results are in table 1we used two systems each based on a different mt modelour syntaxbased system follows the model of galley et al our 8mert could not run to a satisfactory completion in any extended feature scenario as implied in the synthetic data experiment of section 3 the algorithm makes poor choices for its weights and this leads to lowquality kbest lists and dismal performance near 0 bleu in every iteration phrasebased system follows the model of och and ney in both systems we learn alignments with giza using ibm model 4 for urduenglish and chineseenglish we merged alignments with the refined method and for arabicenglish we merged with the union methodtable 2 notes the sizes of the datasets used in our experimentsall tune and test data have four english reference 
sets for the purposes of scoringthe training data for urduenglish is that made available in the constrained track in the nist 2009 mt evaluationthis includes many lexicon entries and other singleword data which accounts for the large number of lines relative to word countthe nist 2008 evaluation set which contains newswire and web data is split into two parts we used roughly half each for tune and testwe trained a 5gram english language model on the english side of the training datathe training data for arabic english is that made available in the constrained track in the nist 2008 mt evaluationthe tune set which contains only newswire data is a mix from nist mt evaluation sets from 20032006 and from gale development datathe test set which contains both web and newswire data is the evaluation set from the nist 2008 mt evaluationwe trained a 4gram english language model on the english side of the training datafor chineseenglish we used 173m words of training data from gale 2008for sbmt we used a 32m word subset for extracting rules and building a language model but used the entire training data for alignments and for all pbmt trainingthe tune and test sets both contain web and newswire datathe tune set is selected from nist mt evaluation sets from 20032006the test set is the evaluation set from the nist 2008 mt evaluationwe trained a 3gram english language model on the english side of the training datafor each of our systems we identify two feature sets baseline which correspond to the typical small feature set reported in current mt literature and extended a superset of baseline which adds hundreds or thousands of featuresspecifically we use 15 baseline features for pbmt similar to the baseline features described by watanabe et al we use 19 baseline features for sbmt similar to the baseline features described by chiang et al we used the following feature classes in sbmt and pbmt extended scenarios we used the following feature classes in sbmt extended scenarios only section 4110 we used the following feature classes in pbmt extended scenarios only the feature classes and number of features used within those classes for each language pair are summarized in table 3each of the three approaches we compare in this study has various details associated with it that may prove useful to those wishing to reproduce our resultswe list choices made for the various tuning methods here and note that all our decisions were made in keeping with best practices for each algorithmwe used david chiangs cmert implementation of mert that is available with the moses system we ran mert for up to 30 iterations using k 1500 and stopping early when 11this constitutes 6723 features in principle but in practice far fewer cooccurrences were seentable 3 shows the number of actual unigram word pair features observed in data the accumulated kbest list does not change in an iterationin every tuning iteration we ran mert once with weights initialized to the last iterations chosen weight set and 19 times with random weights and chose the the best of the 20 ending points according to g on the development setthe g we optimize is tokenized lowercased 4gram bleu we for the most part follow the mira algorithm for machine translation as described by chiang et al 12 but instead of using the 10best of each of the best hw hw g and hwg we use the 30best according to hw13 we use the same sentencelevel bleu calculated in the context of previous 1best translations as chiang et alwe ran mira for 30 iterationswe used the megam classifier 
and sampled as described in section 42as previously noted we used bleu1 for g megam was easy to set up and ran fairly quickly however any linear binary classifier that operates on realvalued features can be used and in fact we obtained similar results using the support vector machine module of weka as well as the stanford classifier we ran for up to 30 iterations and used the same k and stopping criterion as was used for mert though variability of sampling precluded list convergencewhile mert and mira use each iterations final weights as a starting point for hillclimbing the next iteration the pairwise ranking approach has no explicit tie to previous iterationsto incorporate such stability into our process we interpolated the weights w learned by the classifier in iteration t with those from iteration t 1 by a factor of ψ such that wt ψ w wt1we found ψ 01 gave good performance across the boardwe implore the reader to avoid the natural tendency to compare results using baseline vs extended features or between pbmt and sbmt on the same language pairsuch discussions are indeed interesting and could lead to improvements in feature engineering or sartorial choices due to the outcome of wagers but they distract from our thesisas can be seen in table 1 for each of the 12 choices of system language pair and feature set the pro method performed nearly the same as or better than mira and mert on test datain figure 5 we show the tune and test bleu using the weights learned at every iteration for each urduenglish sbmt experimenttypical of the rest of the experiments we can clearly see that pro appears to proceed more monotonically than the other methodswe quantified pros stability as compared to mert by repeating the urduenglish baseline pbmt experiment five times with each configurationthe tune and test bleu at each iteration is depicted in figure 6the standard deviation of the final test bleu of mert was 013 across the five experiment instances while pro had a standard deviation of just 005several works have used discriminative techniques to rerank kbest lists for mttillmann and zhang used a customized form of multiclass stochastic gradient descent to learn feature weights for an mt modeloch and ney used maximum entropy to tune feature weights but did not compare pairs of derivationsittycheriah and roukos used a maximum entropy classifier to train an alignment model using handlabeled dataxiong et al also used a maximum entropy classifier in this case to train the reordering component of their mt modellattice and hypergraphbased variants of mert are more stable than traditional mert but also require significant engineering effortswe have described a simple technique for tuning an mt system that is on par with the leading techniques exhibits reliable behavior scales gracefully to highdimension feature spaces and is remarkably easy to implementwe have demonstrated via a litany of experiments that our claims are valid and that this technique is widely applicableit is our hope that the adoption of pro tuning leads to fewer headaches during tuning and motivates advanced mt feature engineering researchthanks to markus dreyer kevin knight saiyam kohli greg langmead daniel marcu dragos munteanu and wei wang for their assistancethanks also to the anonymous reviewers especially the reviewer who implemented pro during the review period and replicated our results
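A compact sketch of the PRO optimization phase described above: for each source sentence, sample Γ candidate pairs from the k-best list, accept a pair only if its gold-score differential is large enough (a step-function acceptance rule; the 0.05 cutoff below is an assumption, as the exact threshold is not given), keep the Ξ pairs with the largest differential, and train a linear binary classifier on the resulting difference vectors. Scikit-learn's logistic regression stands in here for MegaM, which the authors actually used; any linear binary classifier over real-valued features would do.

```python
import random
import numpy as np
from sklearn.linear_model import LogisticRegression

def pro_sample(candidates, gamma=5000, xi=50, min_diff=0.05):
    """PRO sampling for one source sentence.

    candidates: list of (feature_vector, gold_score) for the k-best list,
                where gold_score is e.g. sentence-level BLEU+1.
    Returns labelled difference vectors for the xi highest-differential pairs.
    """
    accepted = []
    for _ in range(gamma):
        (x1, g1), (x2, g2) = random.sample(candidates, 2)
        if abs(g1 - g2) > min_diff:          # step-function acceptance
            accepted.append((abs(g1 - g2), x1, g1, x2, g2))
    accepted.sort(key=lambda a: a[0], reverse=True)

    xs, ys = [], []
    for _, x1, g1, x2, g2 in accepted[:xi]:
        # add both difference vectors so the training data stays balanced
        xs.append(np.asarray(x1) - np.asarray(x2)); ys.append(1 if g1 > g2 else 0)
        xs.append(np.asarray(x2) - np.asarray(x1)); ys.append(0 if g1 > g2 else 1)
    return xs, ys

def pro_optimize(per_sentence_candidates):
    """Collect samples over all source sentences and fit a linear classifier."""
    xs, ys = [], []
    for candidates in per_sentence_candidates:
        sx, sy = pro_sample(candidates)
        xs.extend(sx); ys.extend(sy)
    clf = LogisticRegression(fit_intercept=False).fit(np.vstack(xs), ys)
    return clf.coef_[0]    # the new MT weight vector w
```

The returned weight vector would then be interpolated with the previous iteration's weights (Ψ = 0.1 in the experiments) before the next candidate-generation phase.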
D11-1125
Tuning as ranking. We offer a simple, effective, and scalable method for statistical machine translation parameter tuning based on the pairwise approach to ranking. Unlike the popular MERT algorithm, our pairwise ranking optimization method is not limited to a handful of parameters and can easily handle systems with thousands of features. Moreover, unlike recent approaches built upon the MIRA algorithm of Crammer and Singer, PRO is easy to implement. It uses off-the-shelf linear binary classifier software and can be built on top of an existing MERT framework in a matter of hours. We establish PRO's scalability and effectiveness by comparing it to MERT and MIRA, and demonstrate parity on both phrase-based and syntax-based systems in a variety of language pairs, using large-scale data scenarios. PRO casts the problem of tuning as a ranking problem between pairs of translation candidates. We optimize ranking in n-best lists but learn parameters in an online fashion. We minimize logistic loss sampled from the merged n-bests, and sentence-BLEU is used for determining ranks.
experimental support for a categorical compositional distributional model of meaning modelling compositional meaning for sentences using empirical distributional methods has been a challenge for computational linguists we implement the abstract categorical model of coecke et al using data from the bnc and evaluate it the implementation is based on unsupervised learning of matrices for relational words and applying them to the vectors of their arguments the evaluation is based on the word disambiguation task developed by mitchell and lapata for intransitive sentences and on a similar new experiment designed for transitive sentences our model matches the results of its competitors in the first experiment and betters them in the second the general improvement in results with increase in syntactic complexity showcases the compositional power of our model as competent language speakers we humans can almost trivially make sense of sentences we have never seen or heard beforewe are naturally good at understanding ambiguous words given a context and forming the meaning of a sentence from the meaning of its partsbut while human beings seem comfortable doing this machines fail to deliversearch engines such as google either fall back on bag of words modelsignoring syntax and lexical relationsor exploit superficial models of lexical semantics to retrieve pages with terms related to those in the query however such models fail to shine when it comes to processing the semantics of phrases and sentencesdiscovering the process of meaning assignment in natural language is among the most challenging and foundational questions of linguistics and computer sciencethe findings thereof will increase our understanding of cognition and intelligence and shall assist in applications to automating languagerelated tasks such as document searchcompositional typelogical approaches and distributional models of lexical semantics have provided two partial orthogonal solutions to the questioncompositional formal semantic models stem from classical ideas from mathematical logic mainly freges principle that the meaning of a sentence is a function of the meaning of its parts distributional models are more recent and can be related to wittgensteins later philosophy of meaning is use whereby meanings of words can be determined from their context the logical models relate to well known and robust logical formalisms hence offering a scalable theory of meaning which can be used to reason inferentiallythe distributional models have found their way into real world applications such as thesaurus extraction or automated essay marking and have connections to semantically motivated information retrieval this twosortedness of defining properties of meaning logical form versus contextual use has left the quest for what is the foundational structure of meaning even more of a challengerecently coecke et al used high level crossdisciplinary techniques from logic category theory and physics to bring the above two approaches togetherthey developed a unified mathematical framework whereby a sentence vector is by definition a function of the kronecker product of its word vectorsa concrete instantiation of this theory was exemplified on a toy hand crafted corpus by grefenstette et al in this paper we implement it by training the model over the entire bncthe highlight of our implementation is that words with relational types such as verbs adjectives and adverbs are matrices that act on their argumentswe provide a general algorithm for building these 
matrices from the corpusthe implementation is evaluated against the task provided by mitchell and lapata for disambiguating intransitive verbs as well as a similar new experiment for transitive verbsour model improves on the best method evaluated in mitchell and lapata and offers promising results for the transitive case demonstrating its scalability in comparison to that of other modelsbut we still feel there is need for a different class of experiments to showcase merits of compositionality in a statistically significant mannerour work shows that the categorical compositional distributional model of meaning permits a practical implementation and that this opens the way to the production of large scale compositional modelsformal semantics to compute the meaning of a sentence consisting of n words meanings of these words must interact with one anotherin formal semantics this further interaction is represented as a function derived from the grammatical structure of the sentence but meanings of words are amorphous objects of the domain no distinction is made between words that have the same typesuch models consist of a pairing of syntactic interpretation rules with semantic interpretation rules as exemplified by the simple model presented in figure 1the parse of a sentence such as cats like milk typically produces its semantic interpretation by substituting semantic representation for their grammatical constituents and applying βreduction where neededsuch a derivation is shown in figure 2this methodology is used to translate sentences of natural language into logical formulae then use computeraided automation tools to reason about them one major drawback is that the result of such analysis can only deal with truth or falsity as the meaning of a sentence and says nothing about the closeness in meaning or topic of expressions beyond their truthconditions and what models satisfy them hence do not perform well on language tasks such as searchfurthermore an underlying domain of objects and a valuation function must be provided as with any logic leaving open the question of how we might learn the meaning of language using such a model rather than just use itdistributional models distributional models of semantics on the other hand dismiss the interaction between syntactically linked words and are solely concerned with lexical semanticsword meaning is obtained empirically by examining the contexts1 in which a word appears and equating the meaning of a word with the distribution of contexts it sharesthe intuition is that context of use is what we appeal to in learning the meaning of a word and that words that frequently have the same sort of context in common are likely to be semantically relatedfor instance beer and sherry are both drinks alcoholic and often because a hangoverwe expect these facts to be reflected in a sufficiently large corpus the words beer and sherry occur within the 1eg words which appear in the same sentence or nword window or words which hold particular grammatical or dependency relations to the word being learned context of identifying words such as drink alcoholic and hangover more frequently than they occur with other content wordssuch context distributions can be encoded as vectors in a high dimensional space with contexts as basis vectorsfor any word vector word the scalar weight cword iassociated with each context basis vector ni is a function of the number of times the word has appeared in that contextsemantic vectors are also denoted by sums lof such weightbasis vector 
pairs learning a semantic vector is just learning its basis weights from the corpusthis setting offers geometric means to reason about semantic similarity as discussed in widdows the principal drawback of such models is their noncompositional nature they ignore grammatical structure and logical words and hence cannot compute the meanings of phrases and sentences in the same efficient way that they do for wordscommon operations discussed in such as vector addition and componentwise multiplication are commutative hence if vw v w or v o w then the dog bit the man the man bit the dog noncommutative operations such as the kronecker product can take wordorder into account or even some more complex syntactic relations as described in clark and pulman however the dimensionality of sentence vectors produced in this manner differs for sentences of different length barring all sentences from being compared in the same vector space and growing exponentially with sentence length hence quickly becoming computationally intractablewhereas semantic compositional mechanisms for settheoretic constructions are well understood there are no obvious corresponding methods for vector spacesto solve this problem coecke et al milkthe logical recipe tells us to apply the meaning of the verb to the meanings of subject and objectbut how can a vector apply to other vectorsthe solution proposed above implies that one needs to have different levels of meaning for words with different typesthis is similar to logical models where verbs are relations and nouns are atomic setsso verb vectors should be built differently from noun vectors for instance as matricesthe general information as to which words should be matrices and which words atomic vectors is in fact encoded in the typelogical representation of the grammatical structure of the sentencethis is the linear map with word vectors as input and sentence vectors as outputhence at least theoretically one should be able to build sentence vectors and compare their synonymity in exactly the same way as one measures word synonymitypregroup grammars the aforementioned linear maps turn out to be the grammatical reductions of a typelogic called a lambek pregroup grammar 2pregroups and vector spaces share the same high level mathematical structure referred to as a compact closed category for a proof and details of this claim see coecke et al for a friendly introduction to category theory see coecke and paquette one consequence of this parity is that the grammatical reductions of a pregroup grammar can be directly transformed into linear maps that act on vectorsin a nutshell pregroup types are either atomic or compoundatomic types can be simple or leftright superscriptedreferred to as adjoint types an example of a compound type is that of a verb nrsnlthe superscripted types express that the verb is a relation with two arguments of type n use the abstract setting of category theory to turn the grammatical structure of a sentence into a morphism compatible with the higher level logical structure of vector spacesone pragmatic consequence of this abstract idea is as followsin distributional models there is a meaning vector for each word eg cats like and which have to occur to the right and to the left of it and that it outputs an argument of the type s a transitive sentence has types as shown in figure 3each type n cancels out with its right adjoint nr from the right and its left adjoint nl from the left mathematically speaking these mean3 nln 1 and nnr 1 here 1 is the unit of 
concatenation 1n n1 n the corresponding grammatical reduction of a transitive sentence is nnrsnl 1s1 s each such reduction can be depicted as a wire diagramthe diagram of a transitive sentence is shown in figure 3cats like milkn nr s nl n syntaxguided semantic composition according to coecke et al and based on a general completeness theorem between compact categories wire diagrams and vector spaces the meaning of sentences can be canonically reduced to linear algebraic formulaethe following is the meaning vector of our transitive sentence cats like here f is the linear map that encodes the grammatical structurethe categorical morphism corresponding to it is denoted by the tensor product of 3 components ev 1s ew where v and w are subject and object spaces s is the sentence space the es are the cups and 1s is the straight line in the diagramthe cups stand for taking inner products which when done with the basis vectors imitate substitutionthe straight line stands for the identity map that does nothingby the rules of the category equation reduces to the following linear algebraic formula with 3the relation is the partial order of the pregroupit corresponds to implication in a logical reading thereofif these inequalities are replaced by equalities ie if nln 1 nnr then the pregroup collapses into a group where nl nr lower dimensions hence the dimensional explosion problem for kronecker products is avoided cats into the first argument place of the verb st is a basis vector of the sentence space s in which meanings of sentences live regardless of their grammatical structurethe degree of synonymity of sentences is obtained by taking the cosine measure of their vectorss is an abstract space it needs to be instantiated to provide concrete meanings and synonymity measuresfor instance a truththeoretic model is obtained by taking the sentence space s to be the 2dimensional space with basis vectors j1 and j0 in this section we present a general scheme to build matrices for relational wordsrecall that given a vector space a with basis nii the kronecker product of two vectors v e i can and ei cbini is defined as follows i z v w caicbj ij where is just the pairing of the basis of a iethe kronecker product vectors belong in the tensor product of a with itself a a hence if a has dimension r these will be of dimensionality r xrthe pointwise multiplication of these vectors is defined as follows v o w cai cbi ni i the intuition behind having a matrix for a relational word is that any relation r on sets x and y ier c_ x x y can be represented as a matrix namely one that has as rowbases x e x and as columnbases y e y with weight cxy 1 where e r and 0 otherwisein a distributional setting the weights which are natural or real numbers will represent more the extent according to which x and y are relatedthis can be determined in different wayssuppose x is the set of animals and chase is a relation on it chase x xtake x dog and y cat with our typelogical glasses on the obvious choice would be to take cxy to be the number of times dog has chased cat ie the number of times the sentence the dog chases the cat has appeared in the corpusbut in the distributional setting this method will be too syntactic and dismissive of the actual meaning of cat and dogif instead the corpus contains the sentence the hound hunted the wild cat cxy will be 0 restricting us to only assign meaning to sentences that have directly appeared in the corpuswe propose to instead use a level of abstraction by taking words such as verbs to be 
distributions over the semantic information in the vectors of their context words rather than over the context words themselvesstart with an rdimensional vector space n with basis n ii in which meaning vectors of atomic words such as nouns livethe basis vectors of n are in principle all the words from the corpus however in practice and following mitchell and lapata we had to restrict these to a subset of the most occurring wordsthese basis vectors are not restricted to nouns they can as well be verbs adjectives and adverbs so that we can define the meaning of a noun in all possible contextsas is usual in contextbased modelsand not only in the context of other nounsnote that basis words with relational types are treated as pure lexical items rather than as semantic objects represented as matricesin short we count how many times a noun has occurred close to words of other syntactic types such as elect and scientific rather than count how many times it has occurred close to their corresponding matrices it is the lexical tokens that form the context not their meaningeach relational word p with grammatical type 7r and m adjoint types α1 α2 αm is encoded as an matrix with m dimensionssince our vector space n has a fixed basis each such maaccording to the procedure described in figure 4linear algebraically this procedure corresponds to computing the following typelogical examples of relational words are verbs adjectives and adverbsa transitive verb is represented as a 2 dimensional matrix since its type is nrsnl with two adjoint types nr and nlthe corresponding vector of this matrix is lational word p and its arguments w1 w2 wm occurring in the same order as described in ps grammatical type 7rrefer to these sequences as prelationssuppose there are k of them2 retrieve the vector wl of each argument wl3 suppose w1 has weight c1i on basis vector n i w2 has weight c2j on basis vector n j and wm has weight cmζ on basis vector n ζmultiply these weights 4 repeat the above steps for all the k prelations and suma the corresponding weights awe also experimented with multiplication but the sparsity of noun vectors resulted in most verb matrices being emptythe weight cij corresponding to basis vector ten i te n j is the extent according to which words that have cooccurred with ten i have been the subject of the verb and words that have cooccurred with ten j have been the object of the verbthis example computation is demonstrated in figure 5as an example consider the verb show and suppose there are two showrelations in the corpus s1 table show result s2 map show location the vector of show is map eet eeeet location consider an n space with four basis vectors far room scientific and electthe tfidfweighted values for vectors of the above four nouns are as shown in table 1part of the matrix of show is presented in table 2as a sample computation the weight c11 for vector ie is computed by multiplying weights of table and result on far ie66x7 et multiplying weights of map and location on far ie56 x 59 then adding these 462 3304 and obtaining the total weight 7924the same method is applied to build matrices for ditransitive verbs which will have 3 dimensions and adjectives and adverbs which will be of 1 dimension eachmeaning of sentences are vectors computed by taking the variables of the categorical prescription of meaning to be determined by the matrices of the relational wordsfor instance the meaning of the transitive sentence sub verb obj is eeeeeeeet citjetst itj we take v w n and s n n then eitj citjets t is 
determined by the matrix of the verb ie substitute it by eij cij4hence eeeeeeeet sub verb obj becomes this can be decomposed to pointwise multiplication of two vectors as follows 4note that by doing so we are also reducing the verb space from n n to n n since for our construction we only need tuples of the form _it i n i _nj _it j which are isomorphic to pairs the left argument is the kronecker product of subject and object vectors and the right argument is the vector of the verb so we obtain since o is commutative this provides us with a distributional version of the typelogical meaning of the sentence pointwise multiplication of the meaning of the verb to the kronecker product of its subject and object sub verb obj verb o this mathematical operation can be informally described as a structured mixing of the information of the subject and object followed by it being filtered through the information of the verb applied to them in order to produce the information of the sentencein the transitive case 5 n n hence s t n i n jmore generally the vector space corresponding to the abstract sentence space 5 is the concrete tensor space for m the dimension of the matrix of the verbas we have seen above in practice we do not need to build this tensor space as the computations thereof reduce to pointwise multiplications and summationssimilar computations yield meanings of sentences with adjectives and adverbsfor instance the meaning of a transitive sentence with a modified subject and a modified verb we have adj sub verb obj adv o o j after building vectors for sentences we can compare their meaning and measure their degree of synonymy by taking their cosine measureevaluating such a framework is no easy taskwhat to evaluate depends heavily on what sort of application a practical instantiation of the model is geared towardsin it is suggested that the simplified model we presented and expanded here could be evaluated in the same way as lexical semantic models measuring compositionally built sentence vectors against a benchmark dataset such as that provided by mitchell and lapata in this section we briefly describe the evaluation of our model against this datasetfollowing this we present a new evaluation task extending the experimental methodology of mitchell and lapata to transitive verbcentric sentences and compare our model to those discussed by mitchell and lapata within this new experimentfirst dataset description the first experiment described in detail by mitchell and lapata evaluates how well compositional models disambiguate ambiguous words given the context of a potentially disambiguating nouneach entry of the dataset provides a noun a target verb and landmark verb the noun must be composed with both verbs to produce short phrase vectors the similarity of which is measured by the candidatealso provided with each entry is a classification indicating whether or not the verbs are indeed semantically close within the context of the noun as well as an evaluatorset similarity score between 1 and 7 where 1 is low similarity and 7 is highevaluation methodology candidate models provide a similarity score for each entrythe scores of high similarity entries and low similarity entries are averaged to produce a mean high score and mean low score for the modelthe correlation of the models similarity judgements with the human judgements is also calculated using spearmans p a metric which is deemed to be more scrupulous and ultimately that by which models should be ranked by mitchell and lapata the mean for 
each model is on a 0 1 scale except for upperbound which is on the same 1 7 scale the annotators usedthe p scores are on a 1 1 scaleit is assumed that interannotator agreement provides the theoretical maximum p for any model for this experimentthe cosine measure of the verb vectors ignoring the noun is taken to be the baseline other models the other models we compare ours to are those evaluated by mitchell and lapata we provide a selection of the results from that paper for the worst and best5 performing models as well as the previous secondbest performing model the additive and multiplicative models are simply applications of vector addition and componentwise multiplicationwe invite the reader to consult for the description of kintschs additive model and parametric choicesmodel parameters to provide the most accurate comparison with the existing multiplicative model and exploiting the aforementioned feature that the categorical model can be built on top of existing lexical distributional models we used the parameters described by mitchell and lapata to reproduce the vectors evaluated in the original experiment as our noun vectorsall vectors were built from a lemmatised version of the bncthe noun basis was the 2000 most common context words basis weights were the probability of context words given the target word divided by the overall probability of the context wordintransitive verb functionvectors were trained using the procedure presented in 4since the dataset only contains intransitive verbs and nouns we used s n the cosine measure of vectors was used as a similarity metricfirst experiment results in table 3 we present the comparison of the selected modelsour categorical model performs significantly better than the existing secondplace and obtains a ρ quasiidentical to the multiplicative model indicating significant correlation with the annotator scoresthere is not a large difference between the mean high score and mean low score but the distribution in figure 6 shows that our model makes a nonnegligible distinction between high similarity phrases and low similarity phrases despite the absolute scores not being different by more than a few percentilessecond dataset description the second dataset6 developed by the authors follows the format of the dataset used for the first experiment with the exception that the target and landmark verbs are transitive and an object noun is provided in addition to the subject noun hence forming a small transitive sentencethe dataset comprises 200 entries consisting of sentence pairs constructed by following the procedure outlined in 4 of using transitive verbs from celex7for examples of these sentences see table 4the dataset was split into four sections of 100 entries each with guaranteed 50 exclusive overlap with exactly two other datasetseach section was given to a group of evaluators with a total of 25 who were asked to form simple transitive sentence pairs from the verbs subject and object provided in each entry for instance the table showed the result from table show resultthe evaluators were then asked to rate the semantic similarity of each verb pair within the context of those sentences and offer a score between 1 and 7 for each entryeach entry was given an arbitrary classification of high or low by the authors for the purpose of calculating mean highlow scores for each modelfor example the first two pairs in table 4 were classified as high whereas the second two pairs as lowevaluation methodology the evaluation methodology for the second 
experiment was identical to that of the first as are the scales for means and scoreshere also spearmans ρ is deemed a more rigorous way of determining how well a model tracks difference in meaningthis is both because of the imprecise nature of the classification of verb pairs as high or low and since the objective similarity scores produced by a model that distinguishes sentences of different meaning from those of similar meaning can be renormalised in practicetherefore the delta between high means and low mean cannot serve as a definite indication of the practical applicability of semantic models the means are provided just to aid comparison with the results of the first experimentmodel parameters as in the first experiment the lexical vectors from were used for the other models evaluated 8 and for the noun vec8kintsch was not evaluated as it required optimising model parameters against a heldout segment of the test set and we could not replicate the methodology of mitchell and lapata tors of our categorical modeltransitive verb vectors were trained as described in 4 with s nnsecond experiment results the results for the models evaluated against the second dataset are presented in table 5we observe a significant improvement in the alignment of our categorical model with the human judgements from 017 to 021the additive model continues to make little distinction between senses of the verb during composition and the multiplicative models alignment does not change but becomes statistically indistinguishable from the noncompositional baseline modelonce again we note that the highlow means are not very indicative of model performance as the difference between high mean and the low mean of the categorical model is much smaller than that of the both the baseline model and multiplicative model despite better alignment with annotator judgementsin this paper we described an implementation of the categorical model of meaning which combines the formal logical and the empirical distributional frameworks into a unified semantic modelthe implementation is based on building matrices for words with relational types and vectors for words with atomic types based on data from the bncwe then show how to apply verbs to their subjectobject in order to compute the meaning of intransitive and transitive sentences with full confidenceother work uses matrices to model meaning but only for adjectivenoun phrasesour approach easily applies to such compositions as well as to sentences containing combinations of adjectives nouns verbs and adverbsthe other key difference is that they learn their matrices in a topdown fashion ie by regression from the composite adjectivenoun context vectors whereas our model is bottomup it learns sentencephrase meaning compositionally from the vectors of the compartments of the compositesfinally very similar functions for example a verb with argument alternations such as break in y breaks and x breaks y are not treated as unrelatedthe matrix of the intransitive break uses the corpusobserved information about the subject of break including that of y similarly the matrix of the transitive break uses information about its subject and object including that of x and ywe leave a thorough study of these phenomena which fall under providing a modular representation of passiveactive similarities to future workwe evaluated our model in two ways first against the word disambiguation task of mitchell and lapata for intransitive verbs and then against a similar new experiment for transitive verbs which 
we developedour findings in the first experiment show that the categorical method performs on par with the leading existing approachesthis should not surprise us given that the context is so small and our method becomes similar to the multiplicative model of mitchell and lapata however our approach is sensitive to grammatical structure leading us to develop a second experiment taking this into account and differentiating it from models with commutative composition operationsthe second experiments results deliver the expected qualitative difference between models with our categorical model outperforming the others and showing an increase in alignment with human judgements in correlation with the increase in sentence complexitywe use this second evaluation principally to show that there is a strong case for the development of more complex experiments measuring not only the disambiguating qualities of compositional models but also their syntactic sensitivity which is not directly measured in the existing experimentsthese results show that the high level categorical distributional model uniting empirical data with logical form can be implemented just like any other concrete modelfurthermore it shows better results in experiments involving higher syntactic complexitythis is just the tip of the iceberg the mathematics underlying the implementation ensures that it uniformly scales to larger more complicated sentences and enables it to compare synonymity of sentences that are of different grammatical structuretreatment of function words such as that who as well as logical words such as quantifiers and conjunctives are left to future workthis will build alongside the general guidelines of coecke et al and concrete insights from the work of widdows it is not yet entirely clear how existing settheoretic approaches for example that of discourse representation and generalised quantifiers apply to our settingpreliminary work on integration of the two has been presented by preller and more recently also by preller and sadrzadeh as mentioned by one of the reviewers our pregroup approach to grammar flattens the sentence representation in that the verb is applied to its subject and object at the same time whereas in other approaches such as ccg it is first applied to the object to produce a verb phrase then applied to the subject to produce the sentencethe advantages and disadvantages of this method and comparisons with other systems in particular ccg constitutes ongoing workwe wish to thank p blunsom s clark b coecke s pulman and the anonymous emnlp reviewers for discussions and commentssupport from epsrc grant epf0427281 is gratefully acknowledged by m sadrzadeh
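As a concrete illustration of the construction in Section 4 that both experiments rely on, the sketch below builds a transitive verb matrix by summing the Kronecker (outer) products of its corpus-observed subject and object vectors, and composes a sentence vector as the pointwise product of that matrix with the Kronecker product of the subject and object. The toy noun vectors are illustrative stand-ins for the BNC-derived vectors (only the weights on the 'far' basis vector follow the worked example for 'show' given earlier); they are not actual corpus values.

import numpy as np

def verb_matrix(verb_relations, noun_vectors):
    # Sum the outer (Kronecker) products of subject and object vectors over
    # all corpus relations of the verb, as in the algorithm of Section 4.
    dim = len(next(iter(noun_vectors.values())))
    M = np.zeros((dim, dim))
    for subj, obj in verb_relations:
        M += np.outer(noun_vectors[subj], noun_vectors[obj])
    return M

def transitive_sentence(subj, verb_M, obj, noun_vectors):
    # Meaning of "subj verb obj": pointwise product of the verb matrix with
    # the Kronecker product of the subject and object vectors.
    return verb_M * np.outer(noun_vectors[subj], noun_vectors[obj])

def cosine(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy noun vectors over the basis (far, room, scientific, elect); only the
# 'far' weights follow the worked example in the text, the rest are made up.
nouns = {"table":    np.array([6.6, 6.2, 0.3, 0.0]),
         "result":   np.array([7.0, 1.2, 8.5, 4.2]),
         "map":      np.array([5.6, 7.9, 0.2, 0.1]),
         "location": np.array([5.9, 7.3, 6.1, 0.0])}
show = verb_matrix([("table", "result"), ("map", "location")], nouns)
s1 = transitive_sentence("table", show, "result", nouns)
s2 = transitive_sentence("map", show, "location", nouns)
print(cosine(s1, s2))

The cosine of the flattened sentence matrices is the similarity score used in both disambiguation experiments; because the composition reduces to elementwise operations over N x N, the full tensor space never has to be built explicitly.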
D11-1129
Experimental support for a categorical compositional distributional model of meaning. Modeling compositional meaning for sentences using empirical distributional methods has been a challenge for computational linguists. We implement the abstract categorical model of Coecke et al. using data from the BNC and evaluate it. The implementation is based on unsupervised learning of matrices for relational words and applying them to the vectors of their arguments. The evaluation is based on the word disambiguation task developed by Mitchell and Lapata for intransitive sentences, and on a similar new experiment designed for transitive sentences. Our model matches the results of its competitors in the first experiment and betters them in the second. The general improvement in results with the increase in syntactic complexity showcases the compositional power of our model. We suggest that relational word vectors live in a space whose dimensionality is a function of the arity of the relation.
named entity recognition in tweets an experimental study people tweet more than 100 million times daily yielding a noisy informal but sometimes informative corpus of 140character messages that mirrors the zeitgeist in an unprecedented manner the performance of standard nlp tools is severely degraded on tweets this paper addresses this issue by rebuilding the nlp pipeline beginning with partofspeech tagging through chunking to recognition our novel doubles compared with the ner system the redundancy inherent in tweets to achieve this performance using labeledlda to exploit freebase dictionaries as a source of distant supervision labeledlda outperforms coincreasing 25 over ten common entity types nlp tools are available at status messages posted on social media websites such as facebook and twitter present a new and challenging style of text for language technology due to their noisy and informal naturelike sms tweets are particularly terse and difficult yet tweets provide a unique compilation of information that is more uptodate and inclusive than news articles due to the lowbarrier to tweeting and the proliferation of mobile devices1 the corpus of tweets already exceeds the size of the library of congress and is growing far more rapidlydue to the volume of tweets it is natural to consider namedentity recognition information extraction and text mining over tweetsnot surprisingly the performance of off the shelf nlp tools which were trained on news corpora is weak on tweet corporain response we report on a retrained nlp pipeline that leverages previouslytagged outofdomain text 2 tagged tweets and unlabeled tweets to achieve more effective partofspeech tagging chunking and namedentity recognition1 the hobbit has finally started filmingi cannot wait2 yessyessits official nintendo announced today that they will release the nintendo 3ds in north america march 27 for 250 3 government confirms blast n nuclear plants n japando not knw wht s gona happen nw we find that classifying named entities in tweets is a difficult task for two reasonsfirst tweets contain a plethora of distinctive named entity types almost all these types are relatively infrequent so even a large sample of manually annotated tweets will contain few training examplessecondly due to twitters 140 character limit tweets often lack sufficient context to determine an entitys type without the aid of background knowledgeto address these issues we propose a distantly supervised approach which applies labeledlda to leverage large amounts of unlabeled data in addition to large dictionaries of entities gathered from freebase and combines information about an entitys context across its mentionswe make the following contributions labeledlda is applied utilizing constraints based on an opendomain database as a source of supervisionthis approach increases f1 score by 25 relative to cotraining on the task of classifying named entities in tweetsthe rest of the paper is organized as followswe successively build the nlp pipeline for twitter feeds in sections 2 and 3we first present our approaches to shallow syntax part of speech tagging and shallow parsing 23 describes a novel classifier that predicts the informativeness of capitalization in a tweetall tools in 2 are used as features for named entity segmentation in 31next we present our algorithms and evaluation for entity classification we describe related work in 4 and conclude in 5we first study two fundamental nlp tasks pos tagging and nounphrase chunkingwe also discuss a novel capitalization 
classifier in 23the outputs of all these classifiers are used in feature generation for named entity recognition in the next sectionfor all experiments in this section we use a dataset of 800 randomly sampled tweetsall results represent 4fold crossvalidation experiments on the respective tasks3 part of speech tagging is applicable to a wide range of nlp tasks including named entity segmentation and information extractionprior experiments have suggested that pos tagging has a very strong baseline assign each word to its most frequent tag and assign each out of vocabulary word the most common pos tagthis baseline obtained a 09 accuracy on the brown corpus however the application of a similar baseline on tweets obtains a much weaker 076 exposing the challenging nature of twitter dataa key reason for this drop in accuracy is that twitter contains far more oov words than grammatical textmany of these oov words come from spelling variation eg the use of the word n for in in table 1 example 3although nnp is the most frequent tag for oov words only about 13 are nnpsthe performance of offtheshelf newstrained pos taggers also suffers on twitter datathe stateoftheart stanford pos tagger improves on the baseline obtaining an accuracy of 08this performance is impressive given that its training data the penn treebank wsj is so different in style from twitter however it is a huge drop from the 97 accuracy reported on the ptbthere are several reasons for this drop in performancetable 3 lists common errors made by the stanford taggerfirst due to unreliable capitalization common nouns are often misclassified as proper nouns and vice versaalso interjections and verbs are frequently misclassified as nounsin addition to differences in vocabulary the grammar of tweets is quite different from edited news textfor instance tweets often start with a verb as in watchng american dad to overcome these differences in style and vocabulary we manually annotated a set of 800 tweets with tags from the penn treebank tag set for use as indomain training data for our pos tagging system tpos4 we add new tags for the twitter specific phenomena retweets usernames hashtags and urlsnote that words in these categories can be tagged with 100 accuracy using simple regular expressionsto ensure fair comparison in table 2 we include a postprocessing step which tags these words appropriately for all systemsto help address the issue of oov words and lexical variations we perform clustering to group together words which are distributionally similar in particular we perform hierarchical clustering using jcluster on 52 million tweets each word is uniquely represented by a bit string based on the path from the root of the resulting hierarchy to the words leafwe use the brown clusters resulting from prefixes of 4 8 and 12 bitsthese clusters are often effective in capturing lexical variations for example following are lexical variations on the word tomorrow from one cluster after filtering out other words tpos uses conditional random fields5 both because of their ability to model strong dependencies between adjacent pos tags and also to make use of highly correlated features besides employing the brown clusters computed above we use a fairly standard set of features that include pos dictionaries spelling and contextual featureson a 4fold cross validation over 800 tweets tpos outperforms the stanford tagger obtaining a 26 reduction in errorin addition we include 40k tokens of annotated irc chat data which is similar in stylelike twitter irc data 
contains many misspelledabbreviated words and also more pronouns and interjections but fewer determiners than newsfinally we also leverage 50k poslabeled tokens from the penn treebank overall tpos trained on 102k tokens results in a 41 error reduction over the stanford tagger obtaining an accuracy of 0883table 3 lists gains on some of the most common error types for example tpos dramatically reduces error on interjections and verbs that are incorrectly classified as nouns by the stanford taggershallow parsing or chunking is the task of identifying nonrecursive phrases such as noun phrases verb phrases and prepositional phrases in textaccurate shallow parsing of tweets could benefit several applications such as information extraction and named entity recognitionoff the shelf shallow parsers perform noticeably worse on tweets motivating us again to annotate indomain training datawe annotate the same set of 800 tweets mentioned previously with tags from the conll shared task we use the set of shallow parsing features described by sha and pereira in addition to the brown clusters mentioned abovepartofspeech tag features are extracted based on crossvalidation output predicted by tposfor inference and learning again we use conditional random fieldswe utilize 16k tokens of indomain training data in addition to 210k tokens of newswire text from the conll datasettable 4 reports tchunks performance at shallow parsing of tweetswe compare against the offthe shelf opennlp chunker6 obtaining a 22 reduction in errora key orthographic feature for recognizing named entities is capitalization unfortunately in tweets capitalization is much less reliable than in edited textsin addition there is a wide variety in the styles of capitalizationin some tweets capitalization is informative whereas in other cases nonentity words are capitalized simply for emphasissome tweets contain all lowercase words whereas others are in all caps to address this issue it is helpful to incorporate information based on the entire content of the message to determine whether or not its capitalization is informativeto this end we build a capitalization classifier tcap which predicts whether or not a tweet is informatively capitalizedits output is used as a feature for named entity recognitionwe manually labeled our 800 tweet corpus as having either informative or uninformative capitalizationthe criteria we use for labeling is as follows if a tweet contains any nonentity words which are capitalized but do not begin a sentence or it contains any entities which are not capitalized then its capitalization is uninformative otherwise it is informativefor learning we use support vector machines7 the features used include the fraction of words in the tweet which are capitalized the fraction which appear in a dictionary of frequently lowercasecapitalized words but are not lowercasecapitalized in the tweet the number of times the word i appears lowercase and whether or not the first word in the tweet is capitalizedresults comparing against the majority baseline which predicts capitalization is always informative are shown in table 5additionally in 3 we show that features based on our capitalization classifier improve performance at named entity segmentationwe now discuss our approach to named entity recognition on twitter dataas with pos tagging and shallow parsing off the shelf namedentity recognizers perform poorly on tweetsfor example applying the stanford named entity recognizer to one of the examples from table 1 results in the following 
output nintendoloc announced today that they will release the nintendoorg 3ds in north americaloc march 27 for 250 the oov word yess is mistaken as a named entityin addition although the first occurrence of nintendo is correctly segmented it is misclassified whereas the second occurrence is improperly segmented it should be the product nintendo 3dsfinally north america should be segmented as a location rather than just americain general newstrained named entity recognizers seem to rely heavily on capitalization which we know to be unreliable in tweetsfollowing collins and singer downey et al and elsner et al we treat classification and segmentation of named entities as separate tasksthis allows us to more easily apply techniques better suited towards each taskfor example we are able to use discriminative methods for named entity segmentation and distantly supervised approaches for classificationwhile it might be beneficial to jointly model segmentation and classification using a joint sequence labeling and topic model similar to that proposed by sauper et al we leave this for potential future workbecause most words found in tweets are not part of an entity we need a larger annotated dataset to effectively learn a model of named entitieswe therefore use a randomly sampled set of 2400 tweets for nerall experiments report results using 4fold cross validation they can refer to people or companies we believe they could be more easily classified using features of their associated users profile than contextual features of the texttseg models named entity segmentation as a sequencelabeling task using iob encoding for representing segmentations and uses conditional random fields for learning and inferenceagain we include orthographic contextual and dictionary features our dictionaries included a set of type lists gathered from freebasein addition we use the brown clusters and outputs of tpos tchunk and tcap in generating featureswe report results at segmenting named entities in table 6compared with the stateoftheart newstrained stanford named entity recognizer tseg obtains a 52 increase in f1 scorebecause capitalization in twitter is less informative than news indomain data is needed to train models which rely less heavily on capitalization and also are able to utilize features provided by tcapwe exhaustively annotated our set of 2400 tweets with named entities8 a convention on twitter is to refer to other users using the symbol followed by their unique usernamewe deliberately choose not to annotate usernames as entities in our data set because they are both unambiguous and trivial to identify with 100 accuracy using a simple regular expression and would only serve to inflate our performance statisticswhile there is ambiguity as to the type of usernames we can determine it is likely a reference to a television show since it often cooccurs with words such as watching and premieres in other contexts9 in order to handle the problem of many infrequent types we leverage large lists of entities and their types gathered from an opendomain ontology as a source of distant supervision allowing use of large amounts of unlabeled data in learningfreebase baseline although freebase has very broad coverage simply looking up entities and their types is inadequate for classifying named entities in context for example according to freebase the mention china could refer to a country a band a person or a filmthis problem is very common 35 of the entities in our data appear in more than one of our freebase 
dictionariesadditionally 30 of entities mentioned on twitter do not appear in any freebase dictionary as they are either too new or are misspelled or abbreviated distant supervision with topic models to model unlabeled entities and their possible types we apply labeledlda constraining each entitys distribution over topics based on its set of possible types according to freebasein contrast to previous weakly supervised approaches to named entity classification for example the cotraining and naive bayes models of collins and singer labeledlda models each entity string as a mixture of types rather than using a single hidden variable to represent the type of each mentionthis allows information about an entitys distribution over types to be shared across mentions naturally handling ambiguous entity strings whose mentions could refer to different typeseach entity string in our data is associated with a bag of words found within a context window around all of its mentions and also within the entity itselfas in standard lda each bag of words is associated with a distribution over topics multinomial and each topic is associated with a distribution over words multinomialin addition there is a onetoone mapping between topics and freebase type dictionariesthese dictionaries constrain oe the distribution over topics for each entity string based on its set of possible types fbefor example oamazon could correspond to a distribution over two types company and location whereas oapple might represent a distribution over company and foodfor entities which are not found in any of the freebase dictionaries we leave their topic distributions oe unconstrainednote that in absence of any constraints labeledlda reduces to standard lda and a fully unsupervised setting similar to that presented by elsner et alin detail the generative process that models our data for named entity classification is as follows generate zei from multgenerate the word wei from multto infer values for the hidden variables we apply collapsed gibbs sampling where parameters are integrated out and the zeis are sampled directlyin making predictions we found it beneficial to consider otrain e as a prior distribution over types for entities which were encountered during trainingin practice this sharing of information across contexts is very beneficial as there is often insufficient evidence in an isolated tweet to determine an entitys typefor entities which were not encountered during training we instead use a prior based on the distribution of types across all entitiesone approach to classifying entities in context is to assume that otrain e is fixed and that all of the words inside the entity mention and context w are drawn based on a single topic z that is they are all drawn from multinomialwe can then compute the posterior distribution over types in closed form with a simple application of bayes rule during development however we found that rather than making these assumptions using gibbs sampling to estimate the posterior distribution over types performs slightly betterin order to make predictions for each entity we use an informative dirichlet prior based on otrain e and perform 100 iterations of gibbs sampling holding the hidden topic variables in the training data fixed fewer iterations are needed than in training since the typeword distributions β have already been inferredto evaluate tclasss ability to classify entity mentions in context we annotated the 2400 tweets with 10 types which are both popular on twitter and have good coverage 
in freebase person geolocation company product facility tvshow movie sportsteam band and othernote that these type annotations are only used for evaluation purposes and not used during training tclass which relies only on distant supervisionin some cases we combine multiple freebase types to create a dictionary of entities representing a single type because our approach does not rely on any manually labeled examples it is straightforward to extend it for a different sets of types based on the needs of downstream applicationstraining to gather unlabeled data for inference we run tseg our entity segmenter on 60m tweets and keep the entities which appear 100 or more timesthis results in a set of 23651 distinct entity stringsfor each entity string we collect words occurring in a context window of 3 words from all mentions in our data and use a vocabulary of the 100k most frequent wordswe run gibbs sampling for 1000 iterations using the last sample to estimate entitytype distributions oe in addition to typeword distributions βttable 7 displays the 20 entities whose posterior distribution oe assigns highest probability to selected typesresults table 8 presents the classification results of tclass compared against a majority baseline which simply picks the most frequent class in addition to the freebase baseline which only makes predictions if an entity appears in exactly one dictionary tclass also outperforms a simple supervised baseline which applies a maxent classifier using 4fold cross validation over the 1450 entities which were annotated for testingadditionally we compare against the cotraining algorithm of collins and singer which also leverages unlabeled data and uses our freebase type lists for seed rules we use the unambiguous freebase entitiesour results demonstrate that tclass outperforms the baselines and achieves a 25 increase in f1 score over cotrainingtables 9 and 10 present a breakdown of f1 scores by type both collapsing types into the standard classes used in the muc competitions and using the 10 popular twitter types described earlierentity strings vsentity mentions dlcotrain and labeledlda use two different representations for the unlabeled data during learninglabeledlda groups together words across all mentions of an entity string and infers a distribution over its possible types whereas dlcotrain considers the entity mentions separately as unlabeled examples and predicts a type independently for eachin order to ensure that the difference in performance between labeledlda and dlcotrain is not simply due to this difference in representation we compare both dlcotrain and labeledlda using both unlabeled datasets in table 11as expected dlcotrain performs poorly when the unlabeled examples group mentions this makes sense since cotraining uses a discriminative learning algorithm so when trained on entities and tested on individual mentions the performance decreasesadditionally labeledldas performance is poorer when considering mentions as documentsthis is likely due to the fact that there is not enough context to effectively learn topics when the documents are very short end to end system finally we present the end to end performance on segmentation and classification in table 12we observe that tner again outperforms cotrainingmoreover comparing against the stanford named entity recognizer on the 3 muc types tner doubles fi scorethere has been relatively little previous work on building nlp tools for twitter or similar text styleslocke and martin train a classifier to recognize 
named entities based on annotated twitter data handling the types person location and organization developed in parallel to our work liu et al investigate ner on the same 3 types in addition to products and present a semisupervised approach using knearest neighbor also developed in parallel gimpel et al build a pos tagger for tweets using 20 coarsegrained tags benson et al present a system which extracts artists and venues associated with musical performances recent work has proposed lexical normalization of tweets which may be useful as a preprocessing step for the upstream tasks like pos tagging and ner in addition finin et al investigate the use of amazons mechanical turk for annotating named entities in twitter minkov et al investigate person name recognizers in email and singh et al apply a minimally supervised approach to extracting entities from text advertisements in contrast to previous work we have demonstrated the utility of features based on twitterspecific pos taggers and shallow parsers in segmenting named entities in addition we take a distantly supervised approach to named entity classification which exploits large dictionaries of entities gathered from freebase requires no manually annotated data and as a result is able to handle a larger number of types than previous work although we found manually annotated data to be very beneficial for named entity segmentation we were motivated to explore approaches that do not rely on manual labels for classification due to twitters wide range of named entity types additionally unlike previous work on ner in informal text our approach allows the sharing of information across an entitys mentions which is quite beneficial due to twitters terse nature previous work on semantic bootstrapping has taken a weaklysupervised approach to classifying named entities based on large amounts of unlabeled text in contrast rather than predicting which classes an entity belongs to a multilabel classification task labeledlda estimates a distribution over its types which is then useful as a prior when classifying mentions in context in addition there has been work on skipchain crfs which enforce consistency when classifying multiple occurrences of an entity within a document using topic models for classifying named entities has a similar effect in that information about an entitys distribution of possible types is shared across its mentions 5 conclusions tagging chunking and named entity recognition perform quite poorly when applied to tweets to address this challenge we have annotated tweets and built tools trained on unlabeled indomain and outofdomain data showing substantial improvement over their stateoftheart newstrained counterparts for example tpos outperforms the stanford pos tagger reducing error by 41 additionally we have shown the benefits of features generated from tpos and tchunk in segmenting named entities we identified named entity classification as a particularly challenging task on twitter due to their terse nature tweets often lack enough context to identify the types of the entities they contain in addition a plethora of distinctive named entity types are present necessitating large amounts of training data to address both these issues we have presented and evaluated a distantly supervised approach based on labeledlda which obtains a 25 increase in f1 score over the cotraining approach to named entity classification suggested by collins and singer when applied to twitter our pos tagger chunker and named entity recognizer are available for use by the research community at http://github.com/aritter/twitter_nlp acknowledgments we would like to thank stephen soderland dan weld and luke zettlemoyer in addition to the anonymous reviewers for helpful comments on a previous draft this research was supported in part by a national defense science and engineering graduate fellowship 32 cfr 168a and carried out at the university of washingtons turing center
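As a companion to the distantly supervised classification step described in the entry above, the following is a minimal sketch of how dictionary membership (for example, entity lists gathered from Freebase) can be turned into a prior over entity types for a mention. This is not the authors' released code: the function name, the dictionaries, and the smoothing scheme are illustrative assumptions, and a model such as LabeledLDA would refine this prior using the mention's context.

```python
def type_prior(mention, type_dictionaries, smoothing=0.01):
    # type_dictionaries maps a type name to a set of known entity strings;
    # no manually annotated tweets are needed to build it.
    key = mention.lower()
    scores = {t: smoothing + (1.0 if key in names else 0.0)
              for t, names in type_dictionaries.items()}
    total = sum(scores.values())
    return {t: score / total for t, score in scores.items()}

# Tiny illustrative dictionaries; real ones would contain millions of entries.
dictionaries = {
    "person":   {"paris hilton", "paris"},
    "location": {"paris", "seattle"},
    "company":  {"yahoo", "nbc"},
}

# An ambiguous name splits its mass across person and location; the context
# model then decides which type fits a particular mention.
print(type_prior("Paris", dictionaries))
```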
D11-1141
named entity recognition in tweets an experimental study people tweet more than 100 million times daily yielding a noisy informal but sometimes informative corpus of 140character messages that mirrors the zeitgeist in an unprecedented manner the performance of standard nlp tools is severely degraded on tweets this paper addresses this issue by rebuilding the nlp pipeline beginning with partofspeech tagging through chunking to namedentity recognition our novel tner system doubles f1 score compared with the stanford ner system tner leverages the redundancy inherent in tweets to achieve this performance using labeledlda to exploit freebase dictionaries as a source of distant supervision labeledlda outperforms cotraining increasing f1 by 25 over ten common entity types our nlp tools are available at http://github.com/aritter/twitter_nlp we use token unigrams as features including any hash tags but ignoring twitter mentions urls and purely numeric tokens our system exploits a crf model to segment named entities and then uses a distantly supervised approach based on labeledlda to classify named entities
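The summary above notes that token unigrams are used as features, keeping hashtags but ignoring Twitter mentions, URLs and purely numeric tokens. A rough sketch of such a feature extractor follows; the whitespace tokenization and the regular expressions are simplifying assumptions (a real system would use a Twitter-aware tokenizer), and the function name is hypothetical.

```python
import re
from collections import Counter

def tweet_unigram_features(tweet):
    """Token unigram counts, keeping hashtags but dropping @-mentions,
    URLs and purely numeric tokens."""
    features = Counter()
    for tok in tweet.split():
        if tok.startswith("@"):                    # Twitter mention
            continue
        if re.match(r"https?://|www\.", tok):      # URL
            continue
        if re.fullmatch(r"\d+([.,]\d+)?", tok):    # purely numeric token
            continue
        features[tok.lower()] += 1
    return features

print(tweet_unigram_features("Yess! #Heroes tonight 9pm on @NBC http://bit.ly/xyz"))
```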
identifying relations for open information extraction open information extraction is the task of extracting assertions from massive corpora without requiring a prespecified vocabulary this paper shows that the output of stateoftheart open ie systems is rife with uninformative and incoherent extractions to overcome these problems we introduce two simple syntactic and lexical constraints on binary relations expressed by verbs we implemented the constraints in the reverb open ie system which more than doubles the area under the precisionrecall curve relative to previous extractors such as textrunner and woepos more than 30 of reverbs extractions are at precision 08 or higher compared to virtually none for earlier systems the paper concludes with a detailed analysis of reverbs errors suggesting directions for future work typically information extraction systems learn an extractor for each target relation from labeled training examples this approach to ie does not scale to corpora where the number of target relations is very large or where the target relations cannot be specified in advance open ie solves this problem by identifying relation phrases phrases that denote relations in english sentences the automatic identification of relation phrases enables the extraction of arbitrary relations from sentences obviating the restriction to a prespecified vocabulary open ie systems have achieved a notable measure of success on massive opendomain corpora drawn from the web wikipedia and elsewhere the output of open ie systems has been used to support tasks like learning selectional preferences acquiring common sense knowledge and recognizing entailment in addition open ie extractions have been mapped onto existing ontologies we have observed that two types of errors are frequent in the output of open ie systems such as textrunner and woe incoherent extractions and uninformative extractions incoherent extractions are cases where the extracted relation phrase has no meaningful interpretation incoherent extractions arise because the learned extractor makes a sequence of decisions about whether to include each word in the relation phrase often resulting in incomprehensible predictions to solve this problem we introduce a syntactic constraint every multiword relation phrase must begin with a verb end with a preposition and be a contiguous sequence of words in the sentence thus the identification of a relation phrase is made in one fell swoop instead of on the basis of multiple wordbyword decisions uninformative extractions are extractions that omit critical information for example consider the sentence faust made a deal with the devil previous open ie systems return the uninformative instead of this type of error is caused by improper handling of relation phrases that are expressed by a combination of a verb with a noun such as light verb constructions an lvc is a multiword expression composed of a verb and a noun with the noun carrying the semantic content of the predicate table 2 illustrates the wide range of relations expressed this way which are not captured by existing open extractors our syntactic constraint leads the extractor to include nouns in the relation phrase solving this problem although the syntactic constraint significantly reduces incoherent and uninformative extractions it allows overlyspecific relation phrases such as is offering only modest greenhouse gas reduction targets at to avoid overlyspecific relation phrases we introduce an intuitive lexical constraint a binary relation phrase ought to appear with at least a minimal number of distinct argument pairs in a large corpus in summary this paper articulates two simple but
surprisingly powerful constraints on how binary relationships are expressed via verbs in english sentences and implements them in the reverb open ie systemwe release reverb and the data used in our experiments to the research communitythe rest of the paper is organized as followssection 2 analyzes previous worksection 3 defines our constraints preciselysection 4 describes reverb our implementation of the constraintssection 5 reports on our experimental resultssection 6 concludes with a summary and discussion of future workopen ie systems like textrunner woepos and woeparse focus on extracting binary relations of the form from textthese systems all use the following threestep method put identifies a candidate pair of np arguments from the sentence and then uses the learned extractor to label each word between the two arguments as part of the relation phrase or notthe extractor is applied to the successive sentences in the corpus and the resulting extractions are collectedthis method faces several challengesfirst the training phase requires a large number of labeled training examples heuristic labeling of examples obviates hand labeling but results in noisy labels and distorts the distribution of examplessecond the extraction step is posed as a sequencelabeling problem where each word is assigned its own labelbecause each assignment is uncertain the likelihood that the extracted relation phrase is flawed increases with the length of the sequencefinally the extractor chooses an extractions arguments heuristically and cannot backtrack over this choicethis is problematic when a word that belongs in the relation phrase is chosen as an argument because of the feature sets utilized in previous work the learned extractors ignore both holistic aspects of the relation phrase as well as lexical aspects thus as we show in section 5 systems such as textrunner are unable to learn the constraints embedded in reverbof course a learning system utilizing a different hypothesis space and an appropriate set of training examples could potentially learn and refine the constraints in reverbthis is a topic for future work which we consider in section 6the first open ie system was textrunner which used a naive bayes model with unlexicalized pos and npchunk features trained using examples heuristically generated from the penn treebanksubsequent work showed that utilizing a linearchain crf or markov logic network can lead to improved extractionthe woe systems introduced by wu and weld make use of wikipedia as a source of training data for their extractors which leads to further improvements over textrunner wu and weld also show that dependency parse features result in a dramatic increase in precision and recall over shallow linguistic features but at the cost of extraction speedother approaches to largescale ie have included preemptive ie ondemand ie and weak supervision for ie preemptive ie and ondemand ie avoid relationspecific extractors but rely on document and entity clustering which is too costly for webscale ieweakly supervised methods use an existing ontology to generate training data for learning relationspecific extractorswhile this allows for learning relationspecific extractors at a larger scale than what was previously possible the extractions are still restricted to a specific ontologymany systems have used syntactic patterns based on verbs to extract relation phrases usually relying on a full dependency parse of the input sentence our work differs from these approaches by focusing on relation phrase 
patterns expressed in terms of pos tags and np chunks instead of full parse treesbanko and etzioni showed that a small set of postag patterns cover a large fraction of relationships in english but never incorporated the patterns into an extractorthis paper reports on a substantially improved model of binary relation phrases which increases the recall of the bankoetzioni model further while previous work in open ie has mainly focused on syntactic patterns for relation extraction we introduce a lexical constraint that boosts precision and recallfinally open ie is closely related to semantic role labeling in that both tasks extract relations and arguments from sentenceshowever srl systems traditionally rely on syntactic parsers which makes them susceptible to parser errors and substantially slower than open ie systems such as reverbthis difference is particularly important when operating on the web corpus due to its size and heterogeneityfinally srl requires handconstructed semantic resources like propbank and framenet as inputin contrast open ie systems require no relationspecific training datareverb in particular relies on its explicit lexical and syntactic constraints which have no correlate in srl systemsfor a more detailed comparison of srl and open ie see in this section we introduce two constraints on relation phrases a syntactic constraint and a lexical constraintthe syntactic constraint serves two purposesfirst it eliminates incoherent extractions and second it reduces uninformative extractions by capturing relation phrases expressed by a verbnoun combination including light verb constructions few possible instances even in a webscale corpusconsider the sentence the obama administration is offering only modest greenhouse gas reduction targets at the conferencethe pos pattern will match the phrase is offering only modest greenhouse gas reduction targets at thus there are phrases that satisfy the syntactic constraint but are not relationalto overcome this limitation we introduce a lexical constraint that is used to separate valid relation phrases from overspecified relation phrases like the example in the constraint is based on the intuition that a valid relation phrase should take many distinct arguments in a large corpusthe phrase in is specific to the argument pair so it is unlikely to represent a bona fide relationwe describe the implementation details of the lexical constraint in section 433 limitations our constraints represent an idealized model of relation phrases in englishthis raises the question how much recall is lost due to the constraintsto address this question we analyzed wu and welds set of 300 sentences from a set of random web pages manually identifying all verbbased relationships between noun phrase pairsthis resulted in a set of 327 relation phrasesfor each relation phrase we checked whether it satisfies our constraintswe found that 85 of the relation phrases do satisfy the constraintsof the remaining 15 we identified some of the common cases where the constraints were violated summarized in table 3many of the example relation phrases shown in table 3 involve longrange dependencies between words in the sentencethese types of dependencies are not easily representable using a pattern over pos tagsa deeper syntactic analysis of the input sentence would provide a much more general language for modeling relation phrasesfor example one could create a model of relations expressed in figure 1 a simple partofspeechbased regular expression reduces the number of incoherent 
extractions like was central torpedo and covers relations expressed via light verb constructions like gave a talk atthe syntactic constraint requires the relation phrase to match the pos tag pattern shown in figure 1the pattern limits relation phrases to be either a verb a verb followed immediately by a preposition or a verb followed by nouns adjectives or adverbs ending in a preposition if there are multiple possible matches in a sentence for a single verb the longest possible match is chosenfinally if the pattern matches multiple adjacent sequences we merge them into a single relation phrase this refinement enables the model to readily handle relation phrases containing multiple verbsa consequence of this pattern is that the relation phrase must be a contiguous span of words in the sentencethe syntactic constraint eliminates the incoherent relation phrases returned by existing systemsfor example given the sentence extendicare agreed to buy arbor health care for about us 432 million in cash and assumed debttextrunner returns the extraction the phrase for assumed is clearly not a valid relation phrase it begins with a preposition and splices together two distant words in the sentencethe syntactic constraint prevents this type of error by simply restricting relation phrases to match the pattern in figure 1the syntactic constraint reduces uninformative extractions by capturing relation phrases expressed via lvcsfor example the pos pattern matched against the sentence faust made a deal with the devil would result in the relation phrase made a deal with instead of the uninformative madefinally we require the relation phrase to appear between its two arguments in the sentencethis is a common constraint that has been implicitly enforced in other open extractors terms of dependency parse features that would capture the noncontiguous relation phrases in table 3previous work has shown that dependency paths do indeed boost the recall of relation extraction systems while using dependency path features allows for a more flexible model of relations it significantly increases processing time which is problematic for webscale extractionfurther we have found that this increased recall comes at the cost of lower precision on web text the results in table 3 are similar to banko and etzionis findings that a set of eight pos patterns cover a large fraction of binary verbal relation phraseshowever their analysis was based on a set of sentences known to contain either a company acquisition or birthplace relationship while our results are on a random sample of web sentenceswe applied banko and etzionis verbal patterns to our random sample of 300 web sentences and found that they cover approximately 69 of the relation phrases in the corpusthe gap in recall between this and the 85 shown in table 3 is largely due to lvc relation phrases and phrases containing multiple verbs which their patterns do not coverin sum our model is by no means completehowever we have empirically shown that the majority of binary verbal relation phrases in a sample of web sentences are captured by our modelby focusing on this subset of language our model can be used to perform open ie at significantly higher precision than beforethis section introduces reverb a novel open extractor based on the constraints defined in the previous sectionreverb first identifies relation phrases that satisfy the syntactic and lexical constraints and then finds a pair of np arguments for each identified relation phrasethe resulting extractions are then 
assigned a confidence score using a logistic regression classifierthis algorithm differs in three important ways from previous methods first the relation phrase is identified holistically rather than wordbywordsecond potential phrases are filtered based on statistics over a large corpus finally reverb is relation first rather than arguments first which enables it to avoid a common error made by previous methodsconfusing a noun in the relation phrase for an argument eg the noun deal in made a deal withreverb takes as input a postagged and npchunked sentence and returns a set of extraction triples2 given an input sentence s reverb uses the following extraction algorithm we check whether a candidate relation phrase r satisfies the syntactic constraint by matching it against the regular expression in figure 1to determine whether r satisfies the lexical constraint we use a large dictionary d of relation phrases that are known to take many distinct argumentsin an offline step we construct d by finding all matches of the pos pattern in a corpus of 500 million web sentencesfor each matching relation phrase we heuristically identify its arguments we set d to be the set of all relation phrases that take at least k distinct argument pairs in the set of extractionsin order to allow for minor variations in relation phrases we normalize each relation phrase by removing inflection auxiliary verbs adjectives and adverbsbased on experiments on a heldout set of sentences we found that a value of k 20 works well for filtering out overspecified relationsthis results in a set of approximately 17 million distinct normalized relation phrases which are stored in memory at extraction timeas an example of the extraction algorithm in action consider the following input sentence hudson was born in hampstead which is a suburb of londonstep 1 of the algorithm identifies three relation phrases that satisfy the syntactic and lexical constraints was born in and is a suburb ofthe first two phrases are adjacent in the sentence so they are merged into the single relation phrase was born instep 2 then finds an argument pair for each relation phrasefor was born in the nearest nps are for is a suburb of the extractor skips over the np which and chooses the argument pair the final output is the extraction algorithm in the previous section has high recall but low precisionlike with previous open extractors we want way to trade recall for precision by tuning a confidence thresholdwe use a logistic regression classifier to assign a confidence score to each extraction which uses the features shown in table 4all of these features are efficiently computable and relation independentwe trained the confidence function by manually labeling the extractions from a set of 1 000 sentences from the web and wikipedia as correct or incorrectprevious open extractors require labeled training data to learn a model of relations which is then used to extract relation phrases from textin contrast reverb uses a specified model of relations for extraction and requires labeled data only for assigning confidence scores to its extractionslearning a confidence function is a much simpler task than learning a full model of relations using two orders of magnitude fewer training examples than textrunner or woethe model of relation phrases used by reverb is specified but could a textrunnerlike system learn this model from training datawhile it is difficult to answer such a question for all possible permutations of features sets training examples and learning 
biases we demonstrate that textrunner itself cannot learn reverbs model even when retrained using the output of reverb as labeled training datathe resulting system textrunnerr uses the same feature representation as textrunner but different parameters and a different set of training examplesto generate positive instances we ran reverb on the penn treebank which is the same dataset that textrunner is trained onto generate negative instances from a sentence we took each noun phrase pair in the sentence that does not appear as arguments in a reverb extractionthis process resulted in a set of 67 562 positive instances and 356834 negative instanceswe then passed these labeled examples to textrunners training procedure which learns a linearchain crf using closedclass features like pos tags capitalization punctuation etctextrunnerr uses the argumentfirst extraction algorithm described in section 2we compare reverb to the following systems each system is given a set of sentences as input and returns a set of binary extractions as outputwe created a test set of 500 sentences sampled from the web using yahoos random link service3 after running each extractor over the input sentences two human judges independently evaluated each extraction as correct or incorrectthe judges reached agreement on 86 of the extractions with an agreement score of n 068we report results on the subset of the data where the two judges concurthe judges labeled uninformative extractions conservativelythat is if critical information was dropped from the relation phrase but included in the second argument it is labeled correctfor example both the extractions and are considered correcteach system returns confidence scores for its extractionsfor a given threshold we can measure the precision and recall of the outputprecision is the fraction of returned extractions that are correctrecall is the fraction of correct extractions in the corpus that are returnedwe use the total number of extractions labeled as correct by the judges as our measure of recall for the corpusin order to avoid doublecounting we treat extractions that differ superficially as a single extractionwe compute a precisionrecall curve by varying the confidence threshold and then compute the area under the curve figure 2 shows the auc of each systemreverb achieves an auc that is 30 higher than woeparse and is more than double the auc of woepos or textrunnerthe lexical constraint provides a significant boost in performance with reverb achieving an auc 23 higher than reverblexreverb proves to be a useful source of training data with textrunnerr having an auc 71 higher than textrunner and performing on par with woeposfrom the training data textrunnerr was able to learn a model that predicts contiguous relation phrases but still returned incoherent relation phrases and overspecified relation phrasesthese errors are due to textrunnerr overfitting the training data and not having access to the lexical constraintfigure 3 shows the precisionrecall curves of the systems introduced in this papertextrunnerr has much lower precision than reverb and reverblex at all levels of recallthe lexical constraint gives reverb a boost in precision over reverblex reducing overspecified extractions from 20 of reverblexs output to 1 of reverbsthe lexical constraint also boosts recall over reverblex since reverb is able to find a correct relation phrase where reverblex finds an overspecified onefigure 4 shows the precisionrecall curves of reverb and the external systemsreverb has much higher 
precision than the other systems at nearly all levels of recallin particular more than 30 of reverbs extractions are at precision 08 or higher compared to virtually none for the other systemswoeparse achieves a slightly higher recall than reverb but at the cost of lower precisionin order to highlight the role of the relational model of each system we also evaluate their performance on the subtask of extracting just the relation phrases from the input textfigure 5 shows the precisionrecall curves for each system on the relation phraseonly evaluationin this case reverb has both higher precision and recall than the other systemsreverbs biggest improvement came from the elimination of incoherent extractionsincoherent extractions were a large fraction of the errors made by previous systems accounting for approximately 13 of textrunners extractions 15 of woeposs and 30 of woeparsesuninformative tion extractions had a smaller effect on other systems precision accounting for 4 of woepquot3s extractions 5 of woepo3s and 7 of textrunners while only appearing in 1 of reverbs extractionsreverbs reduction in uninformative extractions resulted in a boost in recall capturing many lvc relation phrases missed by other systems to test the systems speed we ran each extractor on a set of 100 000 sentences using a pentium 4 machine with 4gb of ramthe processing times were 16 minutes for reverb 21 minutes for textrunner 21 minutes for woepo3 and 11 hours for woepquotthe times for reverb textrunner and woepo3 are all approximately the same since they all use the same postagging and npchunking softwarewoep3 processes each sentence with a dependency parser resulting in much longer processing time52 reverb error analysis to better understand the limitations of reverb we performed a detailed analysis of its errors in precision and its errors in recall table 5 summarizes the types of incorrect extractions that reverb returnswe found that 65 of the incorrect extractions returned by reverb were cases where a relation phrase was correctly identified but the argumentfinding heuristics failedthe remaining errors were cases where reverb extracted an incorrect relation phraseone common mistake that reverb made was extracting a relation phrase that expresses an nary relationship via a ditransitive verbfor example given the sentence table 6 the majority of extractions that were missed by reverb were cases where the correct relation phrase was found but the arguments were not correctly identifiedi gave him 15 photographs reverb extracts these errors are due to the fact that reverb only models binary relationstable 6 summarizes the correct extractions that were extracted by other systems and were not extracted by reverbas with the false positive extractions the majority of false negatives were due to the argumentfinding heuristics choosing the wrong arguments or failing to extract all possible arguments other sources of failure were due to the lexical constraint either failing to filter out an overspecified relation phrase or filtering out a valid relation phrasethese errors hurt both precision and recall since each case results in the extractor overlooking a correct relation phrase and choosing another53 evaluation at scale section 51 shows that reverb outperforms existing open ie systems when evaluated on a sample of sentencesprevious work has shown that the frequency of an extraction in a large corpus is useful for assessing the correctness of extractions thus it is possible a priori that reverbs gains over previous systems 
will diminish when extraction frequency is taken into accountin fact we found that reverbs advantage over textrunner when run at scale is qualitatively similar to its advantage on single sentenceswe ran both reverb and textrunner on banko and etzionis corpus of 500 million web sentences and examined the effect of redundancy on precisionas downeys work predicts precision increased in both systems for extractions found multiple times compared with extractions found only oncehowever reverb had higher precision than 1543 textrunner at all frequency thresholdsin fact reverbs frequency 1 extractions had a precision of 075 which textrunner could not approach even with frequency 10 extractions which had a precision of 034thus reverb is able to return more correct extractions at a higher precision than textrunner even when redundancy is taken into accountthe papers contributions are as follows we have identified and analyzed the problems of incoherent and uninformative extractions for open ie systems and shown their prevalence for systems such as textrunner and woe we articulated general easytoenforce constraints on binary verbbased relation phrases in english that ameliorate these problems and yield richer and more informative relations based on these constraints we designed implemented and evaluated the reverb extractor which substantially outperforms previous open ie systems in both recall and precision we make reverb and the data used in our experiments available to the research community4 in future work we plan to explore utilizing our constraints to improve the performance of learned crf modelsroth et al have shown how to incorporate constraints into crf learners it is natural then to consider whether the combination of heuristically labeled training examples crf learning and our constraints will result in superior performancethe error analysis in section 52 also suggests natural directions for future workfor instance since many of reverbs errors are due to incorrect arguments improved methods for argument extraction are in orderwe would like to thank mausam dan weld yoav artzi luke zettlemoyer members of the knowitall group and the anonymous reviewers for their helpful commentsthis research was supported in part by nsf grant iis0803481 onr grant n000140810431 and darpa contract fa875009c0179 and carried out at the university of washingtons turing center
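The entry above describes ReVerb's syntactic constraint as a part-of-speech pattern: a relation phrase is a verb, a verb followed immediately by a preposition, or a verb followed by nouns, adjectives or adverbs ending in a preposition, with the longest possible match chosen and adjacent matches merged. The sketch below approximates that pattern over Penn Treebank tags; the exact tag-to-class mapping is an assumption rather than the paper's own definition, so treat it as a rough reimplementation, not the released extractor.

```python
import re

# Rough Penn Treebank tag classes for the pattern described above:
# V = verb, W = noun / adjective / adverb / pronoun / determiner,
# P = preposition, particle or infinitive marker.
VERB_TAGS = {"VB", "VBD", "VBG", "VBN", "VBP", "VBZ"}
W_TAGS = {"NN", "NNS", "NNP", "NNPS", "JJ", "JJR", "JJS", "RB", "PRP", "DT"}
P_TAGS = {"IN", "TO", "RP"}

def tag_class(tag):
    if tag in VERB_TAGS:
        return "V"
    if tag in P_TAGS:
        return "P"
    if tag in W_TAGS:
        return "W"
    return "O"  # anything else breaks a relation phrase

# A verb, a verb followed by a preposition, or a verb followed by W-words
# ending in a preposition; the regex is greedy, so the longest match for a
# given verb is chosen.
RELATION_PATTERN = re.compile(r"V(?:W*P)?")

def relation_phrase_spans(pos_tags):
    """Return (start, end) token spans satisfying the syntactic constraint."""
    classes = "".join(tag_class(t) for t in pos_tags)
    spans = [m.span() for m in RELATION_PATTERN.finditer(classes)]
    merged = []  # merge adjacent matches into a single relation phrase
    for start, end in spans:
        if merged and merged[-1][1] == start:
            merged[-1] = [merged[-1][0], end]
        else:
            merged.append([start, end])
    return [tuple(span) for span in merged]

tokens = ["Faust", "made", "a", "deal", "with", "the", "devil"]
tags = ["NNP", "VBD", "DT", "NN", "IN", "DT", "NN"]
for start, end in relation_phrase_spans(tags):
    print(" ".join(tokens[start:end]))  # -> made a deal with
```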
D11-1142
identifying relations for open information extraction open information extraction is the task of extracting assertions from massive corpora without requiring a prespecified vocabulary this paper shows that the output of stateoftheart open ie systems is rife with uninformative and incoherent extractions to overcome these problems we introduce two simple syntactic and lexical constraints on binary relations expressed by verbs we implemented the constraints in the reverb open ie system which more than doubles the area under the precisionrecall curve relative to previous extractors such as textrunner and woepos more than 30 of reverbs extractions are at precision 08 or higher compared to virtually none for earlier systems the paper concludes with a detailed analysis of reverbs errors suggesting directions for future work we show that verbal phrases uncover a large fraction of binary predicates while reducing the amount of noisy phrases that do not denote any relations we develop a large scale webbased reverb corpus comprising tuple extractions of predicate templates with their argument instantiations our reverb corpus is a large scale publicly available web based open extractions data set containing about 15 million unique template extractions automatically extracted from the clueweb09 web crawl
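The lexical constraint summarized above keeps only relation phrases that occur with at least a minimal number of distinct argument pairs (k = 20 in the paper) in a large corpus. A minimal sketch of that dictionary-building step is shown below; the normalization is deliberately crude (the paper removes inflection, auxiliary verbs, adjectives and adverbs) and the function names are illustrative.

```python
from collections import defaultdict

AUXILIARIES = {"is", "was", "are", "were", "be", "been", "has", "have", "had"}

def normalize(rel):
    # Hypothetical, very crude normalization: lowercase and strip a few
    # auxiliaries; a real implementation would also lemmatize and drop
    # adjectives and adverbs.
    kept = [w for w in rel.lower().split() if w not in AUXILIARIES]
    return " ".join(kept) if kept else rel.lower()

def build_relation_dictionary(extractions, k=20):
    """Keep relation phrases observed with at least k distinct argument pairs.

    `extractions` is an iterable of (arg1, relation_phrase, arg2) triples
    harvested from a large corpus with the syntactic constraint alone."""
    arg_pairs = defaultdict(set)
    for arg1, rel, arg2 in extractions:
        arg_pairs[normalize(rel)].add((arg1.lower(), arg2.lower()))
    return {rel for rel, pairs in arg_pairs.items() if len(pairs) >= k}

def satisfies_lexical_constraint(rel, relation_dictionary):
    return normalize(rel) in relation_dictionary
```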
a comparison of vectorbased representations for semantic composition in this paper we address the problem of modeling compositional meaning for phrases and sentences using distributional methods we experiment with several possible combinations of representation and composition exhibiting varying degrees of sophistication some are shallow while others operate over syntactic structure rely on parameter learning or require access to very large corpora we find that shallow approaches are as good as more computationally intensive alternatives with regards to two particular tests phrase similarity and paraphrase detection the sizes of the involved training corpora and the generated vectors are not as important as the fit between the meaning representation and compositional method distributional models of semantics have seen considerable success at simulating a wide range of behavioral data in tasks involving semantic cognition and also in practical applicationsfor example they have been used to model judgments of semantic similarity and association and have been shown to achieve human level performance on synonymy tests such as those included in the test of english as a foreign language this ability has been put to practical use in numerous natural language processing tasks such as automatic thesaurus extraction word sense discrimination language modeling and the identification of analogical relations while much research has been directed at the most effective ways of constructing representations for individual words there has been far less consensus regarding the representation of larger constructions such as phrases and sentencesthe problem has received some attention in the connectionist literature particularly in response to criticisms of the ability of connectionist representations to handle complex structures more recently several proposals have been put forward for computing the meaning of word combinations in vector spacesthis renewed interest is partly due to the popularity of distributional methods and their application potential to tasks that require an understanding of larger phrases or complete sentencesfor example mitchell and lapata introduce a general framework for studying vector composition which they formulate as a function f of two vectors you and v different composition models arise depending on how f is chosenassuming that composition is a linear function of the cartesian product of you and v allows to specify additive models which are by far the most common method of vector combination in the literature alternatively assuming that composition is a linear function of the tensor product of you and v gives rise to models based on multiplicationone of the most sophisticated proposals for semantic composition is that of clark et al and the more recent implementation of grefenstette and sadrzadeh using techniques from logic category theory and quantum information they develop a compositional distributional semantics that brings typelogical and distributional vector space models togetherin their framework words belong to different typebased categories and different categories exist in different dimensional spacesthe category of a word is decided by the number and type of adjoints it can take and the composition of a sentence results in a vector which exists in sentential spaceverbs adjectives and adverbs act as relational functions are represented by matrices and modify the properties of nouns that are represented by vectors for a proposal similar in spiritclarke introduces 
contexttheoretic semantics a general framework for combining vector representations based on a mathematical theory of meaning as context and shows that it can be used to describe a variety of models including that of clark et al socher et al and socher et al present a framework based on recursive neural networks that learns vector space representations for multiword phrases and sentencesthe network is given a list of word vectors as input and a binary tree representing their syntactic structurethen it computes an ndimensional representation p of two ndimensional children and the process is repeated at every parent node until a representation for a full tree is constructedparent representations are computed essentially by concatenating the representations of their childrenduring training the model tries to minimize the reconstruction errors between the ndimensional parent vectors and those representing their childrenthis model can also compute compositional representations when the tree structure is not given eg by greedily inferring a binary treealthough the type of function used for vector composition has attracted much attention relatively less emphasis has been placed on the basic distributional representations on which the composition functions operatein this paper we examine three types of distributional representation of increasing sophistication and their effect on semantic compositionthese include a simple semantic space where a words vector represents its cooccurrence with neighboring words a syntaxaware space based on weighted distributional tuples that encode typed cooccurrence relations among words and word embeddings computed with a neural language model word embeddings are distributed representations lowdimensional and realvaluedeach dimension of the embedding represents a latent feature of the word hopefully capturing useful syntactic and semantic propertiesusing these representations we construct several compositional models based on addition multiplication and recursive neural networkswe assess the effectiveness of these models using two evaluation protocolsthe first one involves modeling similarity judgments for short phrases gathered in human experiments the second one is paraphrase detection ie the task of examining two sentences and determining whether they have the same meaning we find that shallow approaches are as good as more computationally intensive alternativesthey achieve considerable semantic expressivity without any learning sophisticated linguistic processing or access to very large corporaour contributions in this work are threefold an empirical comparison of a broad range of compositional models some of which are introduced here for the first time the use of an evaluation methodology that takes into account the full spectrum of compositionality from phrases to sentences and the empirical finding that relatively simple compositional models can be used to perform competitively on the paraphrase detection and phrase similarity tasksthe elementary objects that we operate on are vectors associated with wordswe instantiate these word representations following three distinct semantic space models which we describe in section 21 belowanalogously in section 22 we consider three methods of vector composition ie how a phrase or a sentence can be represented as a vector using the vectors of its constituent wordscombining different vector representations and composition methods gives rise to several compositional models whose performance we evaluate in sections 3 and 
4for all of our experiments we employ column vectors from a cartesian finitelydimensional spacethe dimensionality will depend on the source of the vectors involvedsimilarly the component values inside each sources vectors are not to be interpreted in the same mannernonetheless they have in common that they originate from distributive corpus statistics meaning is commonly represented in a highdimensional space where each component corresponds to some contextual element in which the word is foundthe contextual elements can be words themselves or larger linguistic units such as sentences or documents or even more complex linguistic representations such as the argument slots of predicatesa semantic space that is often employed in studying compositionality across a variety of tasks uses a context window of five words on either side of the target word and 2000 vector dimensionsthese are the common context words in the british national corpus a corpus of about 100 million tokenstheir values are set to the ratio of the probability of the context word given the target word to the probability of the context word overallmore formally let us consider the bnc as a set of sentences ni from the bncs vocabulary vocbncthen f reqw is the amount of times that each word w vocbnc appears in the bncmitchell and lapata collect the m most frequent nonstoplist words in the set ctxttop w1 wm and let them consitute the word vectors dimensionseach dimensions value is obtained from a cooccurrence count for w vocbnc and j 1m using these counts they define word vectors componentwise for j 1m where totalcount is the total number of words in the bncthis space is relatively simple it has few parameters requires no preprocessing other than tokenization and involves no syntactic information or parameter learningdespite its simplicity it is a good starting point for studying representations for compositional models as a baseline against which to evaluate more elaborate modelsneural language model another perhaps less wellknown approach to meaning representation is to represent words as continuous vectors of parameterssuch word vectors can be obtained with an unsupervised neural language model collobert and weston which jointly learns an embedding of words into a vector space and uses these vectors to predict how likely a word is given its contextwe induced word embeddings with collobert and weston s neural language modelthe model is discriminative and nonprobabilisticeach word i d is embedded into a ddimensional space using a lookup table ltw where w rdd is a matrix of parameters to be learnedwi rd is the ith column of w and d is the word vector size to be chosen by the userthe parameters w are automatically trained during the learning process using backpropagationspecifically at each training update the model reads an ngram x from the corpusthe ngram is paired with a corrupted ngram x where wn 6 wn is chosen uniformly from the vocabularythe model concatenates the learned embeddings of the n words and predicts a score for the ngram sequence using the learned embeddings as featuresthe training criterion is that ngrams that are present in the training corpus must have a score at least some margin higher than the corrupted ngramsthe model learns via gradient descent over the neural network parameters and the embedding lookup tableword vectors are stored in a word embedding matrix which captures syntactic and semantic information from cooccurrence statisticsas these representations are learned albeit in an unsupervised manner one 
would hope that they capture word meanings more succinctly compared to the simpler distributional representations that are merely based on cooccurrencewe trained the neural language model on the bncwe optimized the models parameters on a word similarity task using 4 of the bnc as development dataspecifically we used wordsim353 a benchmark dataset consisting of relatedness judgments for 353 word pairswe experimented with vectors of varying dimensionality the size of the target words context window was 2 3 and 4 in turnthe rate at which embeddings were learned ranged from 34 x 1010 to 67 x 1010 to 109we ran each training process for 11 x 108 to 27 x 108 iterations we obtained the best results with 50 dimensions a context window of size 4 and a embedding learning rate of 109the nlm with these parameters was then trained for 151x109 iterations figure 1 illustrates a twodimensional projection of the embeddings for the 500 most common words in the bncwe only show two out of the actual 50 dimensions involved but one can already begin to see clusterings of a syntactic and semantic naturein one corner for example we encounter a grouping of possessive pronouns together with the possessive clitic sthe singular ones my her and his are closely positioned as are the plural ones our your and theiralso there is a clustering of sociopolitical terms such as international country national government and councildistributional memory tensor baroni and lenci present distributional memory a generalized framework for distributional semantics from which several specialpurpose models can be derivedin their framework distributional information is extracted from the corpus once in the form of a set of weighted wordlinkword tuples arranged into a thirdorder tensordifferent matrices are then generated from the tensor and their rows and columns give rise to different semantic spaces appropriate for capturing different semantic problemsin this way the same distributional information can be shared across tasks such as word similarity or analogical learningmore formally baroni and lenci construct a 3dimensional tensor t assigning a value c to instances of word pairs wv and a connecting linkword l this representation operates over a dependencyparsed corpus and the scores c are obtained via counting the occurrences of tuples and weighting the raw counts by mutual informationtable 1 presents examples of tensor entriesthese were taken from a distributional memory tensor1 that baroni and lenci obtained via preprocessing several corpora the webderived ukwac corpus of about 1915 billion words a mid2009 dump of the english wikipedia containing about 820 million words and the bncextracting a 3dimensional tensor from the bnc alone would create very sparse representationswe therefore extract socalled wordfibres essentially projections onto a lowerdimensional subspace from the same tensor baroni and lenci collectively derived from the 3 billion word corpus just described we view the 3dimensional tensor as a mapping which assigns each target word w a nonzero value c given the context all wordcontext combinations not listed in t are implicitly assigned a zero valuenow we consider two possible approaches for obtaining vectors depending on their applicationfirst we let the d most frequent contexts constitute the d dimensions that each word vector will havetable 2 shows the 11 contexts that appear most frequently in t thus each target words vector is defined componentwise as for j 1d this approach is used when a fixed vector dimensionality 
is necessarya more dynamic approach is possible when very few words w1wn are involved in a testtheir representations can then have a denser format that is with no zerovalued componentsfor this we identify the set of contexts common to the words involved ctxtdyn each context again constitutes a vector dimensionthe dimensionality varies strongly depending on the selection of words but if n does not exceed 4 the dimensionality ctxtdyn will typically be substantial enoughin this approach each words vector consists of the values c found along with that word and its context in the tensorin our experiments we compose word vectors to create representations for phrase vectors and sentence vectorsthe phrases we are interested in consist of two words each an adjective and a noun like black hair a compound noun made up of two nouns such as oil industry or a verbal phrase with a transitive verb and an object noun eg pour teaconceiving of a phrase phr as a binary tuple of words we obtain its vector from its words vectors either by addition in the same way we acquire a vector senveci representing a sentence seni ni from the vectors for w1wniwe simply sum the existing word vectors that is vectors obtained via the respective corpus for words that are not on our stoplist and do the same with pointwise multiplication the multiplication model in can be seen as an instantiation of the categorical compositional framework put forward by clark et al in fact a variety of multiplicationbased models can be derived from this framework and comparisons against componentwise multiplication on phrase similarity tasks yield comparable results we thus opt for the model as an example of compositional models based on multiplication due to its good performance across a variety of tasks including language modeling and prediction of reading difficulty our third method for creating phrase and sentence vectors alike is the application of socher et al s modelthey use the stanford parser to create a binary parse tree for each input phrase or sentencethis tree is then used as the basis for a deep recursive autoencoder the aim is to construct a vector representation for the trees root bottomup where the leaves contain word vectorsthe latter can in theory be provided by any type of semantic space however socher et al use word embeddings provided by the neural language model given the binary tree input structure the model computes parent representations p from their children using a standard neural network layer where c1c2 is the concatenation of the two children f is an elementwise activation function such as tanh b is a bias term and w e rnx2n is an encoding matrix that we want to learn during trainingone way of assessing how well p represents its direct children is to decode their vectors in a reconstruction layer during training the goal is to minimize the reconstruction errors of all input pairs at nonterminal nodes p in a given parse tree by computing the square of the euclidean distance between the original input and its reconstruction socher et al extend the standard recursive autoencoder sketched above in two waysfirstly they present an unfolding autoencoder that tries to reconstruct all leaf nodes underneath each node rather than only its direct childrenand secondly instead of transforming the two children directly into a parent p they introduce another hidden layer inbetweenwe obtained three compositional models per representation resulting in nine compositional models overallplugging different representations into the 
additive and multiplicative models is relatively straightforwardthe rae can also be used with arbitrary word vectorssocher et al obtain best results with 100dimensional vectors which we also used in our experimentsnlm vectors were trained with this dimensionality on the bnc for 79 x 108 iterations we constructed a simple distributional space with m 100 dimensions ie those connected to the 100 most frequent cooccurrence wordsin the case of vectors obtained from baroni and lenci s dm tensor we differentiated between phrases and sentences due to the disparate amount of words contained in them to represent phrases we used vectors of dynamic dimensionality since these form a richer and denser representationthe sentences considered in section 4 are too large for this approach and all word vectors must be members of the same vector spacehence these sentence vectors have fixed dimensionality d 100 consisting of the most significant 100 dimensions ie those reflecting the 100 most frequent contextsour first experiment focused on modeling similarity judgments for short phrases gathered in human experimentsdistributional representations of individual words are commonly evaluated on tasks based on their ability to model semantic similarity relations eg synonymy or primingthus it seems appropriate to evaluate phrase representations in a similar mannerspecifically we used the dataset from mitchell and lapata which contains similarity judgments for adjectivenoun nounnoun and verbobject phrases respectively2 each item is a phrase pair phr1 phr2 which has a human rating from 1 to 7 using the composition models described above we compute the cosine similarity of phr1 and phr2 model similarities were evaluated against the human similarity ratings using spearmans p correlation coefficienttable 3 summarizes the performance of the various models on the phrase similarity datasetrows in the table correspond to different vector representations the simple distributional semantic space from mitchell and lapata baroni and lencis distributional memory tensor and the neural language model for each phrase combination adjective noun nounnoun and verb object for each phrase type we report results for each compositional model namely additive multiplicative and recursive autoencoder the table also shows the dimensionality of the input vectors next to the vector representationas can be seen for sds the best performing model is multiplication as it is mostly for dmwith regard to nlm vector addition yields overall better resultsin general neither dm or nlm in any compositional configuration are able to outperform sds with multiplicationall models in table 3 are significantly correlated with the human similarity judgments spearmans p differences of 03 or more are significant at the 001 level using a ttest although the phrase similarity task gives a fairly direct insight into semantic similarity and compositional representations it is somewhat limited in scope as it only considers twoword constructions rather than naturally occurring sentencesideally we would like to augment our evaluation with a task which is based on large quantities of natural data and for which vector composition has practical consequencesfor these reasons we used the microsoft research paraphrase corpus introduced by dolan et al the corpus consists of sentence pairs seni1seni2 and labels indicating whether they are in a paraphrase relationship or notthe vector representations obtained from our various compositional models were used as features for the 
paraphrase classification taskthe msrpc dataset contains 5801 sentence pairs we used the standard split of 4076 training pairs and 1725 test pairs in order to judge whether two sentences have the same meaning we employ fan et al s liblinear classifierfor each of our three vector sources and three different compositional methods we create the following features a vector representing the pair of input sentences either via concatenation or subtraction a vector encoding which words appear therein and a vector made up of the following four other pieces of information the cosine similarity of the sentence vectors the length of seni1 the length of seni2 and the unigram overlap among the two sentencesin order to encode which words appear in each sentence and how often we define a vector wdcounti for sentence seni and enumerate all words occuring in the msrpc giving the word count vectors nmsrpc dimensionsthus the kth component of wdcounti is the frequency with which the word w appears in for k 1nmsrpceven though nmsrpc may be large the computer files storing our feature vectors do not explode in size because wdcount contains many zeros and the classifier allows a sparse notation of feature valuesregarding the last four features we measured the similarity between sentences the same way as we did with phrases in section 3note that this is the cosine of the angle between senveci1 and senveci2this enables us to observe the similarity or dissimilarity of two sentences independent of their sentence lengtheven though each contained word increases or decreases the norm of the resulting sentence vector this does not distort the overall similarity value due to normalizationthe lengths of seni1 and seni2 are simply the number of words they containthe unigram overlap feature value may be viewed as the cardinality of the intersection of each sentences multisetbagofwordsthe latter is encoded in the alreadyintroduced wdcount vectorstherefore in order to establish which features work best for each representation and composition method we exhaustively explored all combinations on a development set tables 4 and 5 show our results on the test set with the best feature combinations for each model each row corresponds to a different type of composition and each column to a different word representation modelas can be seen the distributional memory is the best performing representation for the additive composition modelthe neural language model gives best results for the recursive autoencoder although the other two representations come closeand finally the simple distributional semantic space works best with multiplicationalso note that the best performing models namely dm with addition and sds with multiplication use a basic feature space consisting only of the cosine similarity of the composed sentence vectors the length of the two sentences involved and their unigram word overlapalthough our intention was to use the paraphrase detection task as a testbed for evaluating compositional models rather than achieving stateoftheart results table 6 compares our approach against previous work on the same task and datasetinitial research concentrated on individual words rather than sentential representationsseveral approaches used wordnet in conjunction with distributional similarity in an attempt to detect meaning conveyed by synonymous words more recently the addition of syntactic features based on dependency parse trees has been shown to substantially boost performancethe model of das and smith for example uses 
quasisynchronous dependency grammar to model the structure of the sentences involved in the comparison and their correspondencessocher et al obtain an accuracy that is higher than previously published resultsthis model is more sophisticated than the one we used in our experiments rather than using the output of the rae as features for the classifier it applies dynamic pooling a procedure that takes a similarity matrix as input and maps it to a matrix of fixed size that represents more faithfully the global similarity structure3 overall we observe that our own models do as well as some of the models that employ wordnet and more sophisticated syntactic featureswith regard to f1 we are comparable with das and smith and socher et al without using elaborate features or any additional manipulations over and above the output of the composition functions 3without dynamic pooling their model yields an accuracy of 742 which if added could increase performancein this paper we systematically compared three types of distributional representation and their effect on semantic compositionour comparisons involved a simple distributional semantic space word embeddings computed with a neural language model and a representation based on weighted wordlinkword tuples arranged into a thirdorder tensor these representations vary in many respects the amount of preprocessing and linguistic information involved whether the semantic space is the byproduct of a learning process and data requirements these representations served as input to three composition methods involving addition multiplication and a deep recursive autoencoderagain these methods differ in terms of how they implement compositionality addition and multiplication are commutative and associative operations and thus ignore word order and more generally syntactic structurein contrast the recursive autoencoder is syntaxaware as it operates over a parse treehowever the composed representations must be learned with a neural networkwe evaluated nine models on the complementary tasks of phrase similarity and paraphrase detectionthe former task simplifies the challenge of finding an adequate method of composition and places more emphasis on the representation whereas the latter poses in a sense the ultimate challenge for composition modelsit involves entire sentences exhibiting varied syntactic constructions and in the limit involves genuine natural language undertandingacross both tasks our results deliver a consistent message simple is bestdespite being in theory more expressive the representations obtained by the neural language model and the thirdorder tensor cannot match the simple semantic space on the phrase similarity taskin this task syntaxoblivious composition models are superior to the more sophisticated recursive autoencoderthe latter performs better on the paraphrase detection task when its output is fed to a classifierthe simple semantic space may not take word order or sentence structure into account but nevertheless achieves considerable semantic expressivity it is on par with the thirdorder tensor without having access to as much data or a syntactically parsed corpuswhat do these findings tell us about the future of compositional models for distributional semanticsthe problem of finding the right methods of vector composition cannot be pursued independent of the choice of lexical representationhaving tested many model combinations we argue that in a good model of distributive semantics representation and composition must go hand in hand ie 
they must be mutually learnedacknowledgments we are grateful to jeff mitchell for his help with the reimplementation of his modelsthanks to frank keller and micha elsner for their input on earlier versions of this work and to richard socher for technical assistancewe acknowledge the support of epsrc through project grant epi0329161
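The feature construction for the paraphrase classifier described in the text above combines a composed-vector pair, sparse word counts over the MSRPC vocabulary, and four scalar cues (cosine similarity, the two sentence lengths, and the unigram overlap). The following Python/NumPy sketch illustrates that combination under stated assumptions; the function names, the dictionary-based sparse encoding, and the returned tuple are illustrative choices, not the authors' code.

    import numpy as np

    def cosine(u, v):
        # Cosine of the angle between two sentence vectors; because of the
        # normalization, the value is independent of sentence length.
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v) / denom if denom else 0.0

    def wdcount(tokens, vocab):
        # Sparse word-count vector over the MSRPC vocabulary (n_msrpc
        # dimensions); only non-zero components are stored, mirroring the
        # sparse feature notation accepted by the classifier.
        counts = {}
        for tok in tokens:
            if tok in vocab:
                counts[vocab[tok]] = counts.get(vocab[tok], 0) + 1
        return counts

    def pair_features(vec1, vec2, toks1, toks2, vocab, mode="concat"):
        # (a) a vector representing the sentence pair, via concatenation or
        # subtraction of the two composed sentence vectors
        pair_vec = np.concatenate([vec1, vec2]) if mode == "concat" else vec1 - vec2
        # (b) which words appear in each sentence, and how often
        counts1, counts2 = wdcount(toks1, vocab), wdcount(toks2, vocab)
        # (c) the four scalar cues: cosine similarity, the two lengths, and
        # the unigram overlap (cardinality of the multiset intersection)
        overlap = sum(min(toks1.count(w), toks2.count(w)) for w in set(toks1))
        extras = np.array([cosine(vec1, vec2), len(toks1), len(toks2), overlap])
        return pair_vec, counts1, counts2, extras

A sentence pair would then be represented by whichever subset of these feature groups performed best on the development set, passed to LIBLINEAR in its sparse input format.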
D12-1050
a comparison of vectorbased representations for semantic compositionin this paper we address the problem of modeling compositional meaning for phrases and sentences using distributional methodswe experiment with several possible combinations of representation and composition exhibiting varying degrees of sophisticationsome are shallow while others operate over syntactic structure rely on parameter learning or require access to very large corporawe find that shallow approaches are as good as more computationally intensive alternatives with regards to two particular tests phrase similarity and paraphrase detectionthe sizes of the involved training corpora and the generated vectors are not as important as the fit between the meaning representation and compositional methodwe compute a weighted linear combination of the embeddings for words that appear in the document to be classifiedwe compare count and predict representations as input to composition functionsfor paraphrase detection we use cosine similarity between sentence pairs together with two shallow similarity cues word overlap between the two sentences and difference in sentence lengthadd and mult attained the top performance with the simple models for both figures of merit
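As a minimal illustration of the two syntax-oblivious composition functions evaluated in the preceding paper (the add and mult models of the summary), the sketch below composes a list of word vectors by componentwise addition or multiplication; the function names are hypothetical.

    import numpy as np

    def compose_add(word_vectors):
        # Additive composition: componentwise sum of the word vectors.
        return np.sum(word_vectors, axis=0)

    def compose_mult(word_vectors):
        # Multiplicative composition: componentwise product of the word vectors.
        return np.prod(word_vectors, axis=0)

    # Both operations are commutative and associative, so any reordering of
    # the words yields the same sentence vector: word order and syntactic
    # structure are ignored, in contrast to the recursive autoencoder, which
    # composes along a parse tree.
    vs = [np.array([0.2, 0.5, 0.1]), np.array([0.4, 0.3, 0.9])]
    assert np.allclose(compose_add(vs), compose_add(vs[::-1]))
    assert np.allclose(compose_mult(vs), compose_mult(vs[::-1]))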
a transitionbased system for joint partofspeech tagging and labeled nonprojective dependency parsing most current dependency parsers presuppose that input words have been morphologically disambiguated using a partofspeech tagger before parsing begins we present a transitionbased system for joint partofspeech tagging and labeled dependency parsing with nonprojective trees experimental evaluation on chinese czech english and german shows consistent improvements in both tagging and parsing accuracy when compared to a pipeline system which lead to improved stateoftheart results for all languages dependencybased syntactic parsing has been the focus of intense research efforts during the last decade and the state of the art today is represented by globally normalized discriminative models that are induced using structured learninggraphbased models parameterize the parsing problem by the structure of the dependency graph and normally use dynamic programming for inference but other inference methods have been explored especially for nonprojective parsing transitionbased models parameterize the problem by elementary parsing actions and typically use incremental beam search despite notable differences in model structure graphbased and transitionbased parsers both give stateoftheart accuracy with proper feature selection and optimization it is noteworthy however that almost all dependency parsers presuppose that the words of an input sentence have been morphologically disambiguated using a partofspeech taggerthis is in stark contrast to the best parsers based on pcfg models such as the brown parser and the berkeley parser which not only can perform their own partofspeech tagging but normally give better parsing accuracy when they are allowed to do sothis suggests that joint models for tagging and parsing might improve accuracy also in the case of dependency parsingit has been argued that joint morphological and syntactic disambiguation is especially important for richly inflected languages where there is considerable interaction between morphology and syntax such that neither can be fully disambiguated without considering the otherthus lee et al show that a discriminative model for joint morphological disambiguation and dependency parsing outperforms a pipeline model in experiments on latin ancient greek czech and hungarianhowever li et al and hatori et al report improvements with a joint model also for chinese which is not a richly inflected language but is nevertheless rich in partofspeech ambiguitiesin this paper we present a transitionbased model for joint partofspeech tagging and labeled dependency parsing with nonprojective treesexperiments show that joint modeling improves both tagging and parsing accuracy leading to stateoftheart accuracy for richly inflected languages like czech and german as well as more configurational languages like chinese and englishto our knowledge this is the first joint system that performs labeled dependency parsingit is also the first joint system that achieves stateoftheart accuracy for nonprojective dependency parsingtransitionbased dependency parsing was pioneered by yamada and matsumoto and nivre et al who used classifiers trained to predict individual actions of a deterministic shiftreduce parserrecent research has shown that better accuracy can be achieved by using beam search and optimizing models on the entire sequence of decisions needed to parse a sentence instead of single actions in addition a number of different transition systems have been proposed in 
particular for dealing with nonprojective dependencies which were beyond the scope of early systems in this section we start by defining a transition system for joint tagging and parsing based on the nonprojective transition system proposed in nivre we then show how to perform beam search and structured online learning with this model and conclude by discussing feature representationsgiven a set p of partofspeech tags and a set d of dependency labels a tagged dependency tree for a sentence x w1 wn is a directed tree t with labeling functions 7r and 6 such that the set vx of nodes is the set of positive integers up to and including n each corresponding to the linear position of a word in the sentence plus an extra artificial root node 0the set a of arcs is a set of pairs where i is the head node and j is the dependent nodethe functions 7r and 6 assign a unique partofspeech label to each nodeword and a unique dependency label to each arc respectivelythis notion of dependency tree differs from the standard definition only by including partofspeech labels as well as dependency labels following nivre we define a transition system for dependency parsing as a quadruple 5 where a transition sequence for a sentence x in 5 is a sequence of configurationtransition pairs c0m in this paper we take the set c of configurations to be the set of all 5tuples c such that e and b are disjoint sublists of the nodes vx of some sentence x a is a set of dependency arcs over vx and 7r and 6 are labeling functions as defined abovewe take the initial configuration for a sentence x w1 wn to be cs where l is the function that is undefined for all arguments and we take the set ct of terminal configurations to be the set of all configurations of the form c the tagged dependency tree defined for x by c is the tree with labeling functions 7r and 6 which we write treethe set t of transitions is shown in figure 1the leftarcd and rightarcd transitions both add an arc between the two nodes on top of the stack and replaces these nodes by the head node of the new arc the shiftp transition extracts the first node in the buffer pushes it onto the stack and labels it with the partofspeech tag p the swap transition extracts the second topmost node from the stack and moves it back to the buffer subject to the condition that the two top nodes on the stack are still in the order given by the sentenceexcept for the addition of a tag parameter p to the shift transition this is equivalent to the system described in nivre which thanks to the swap transition can handle arbitrary nonprojective treesthe soundness and completeness results given in that paper trivially carry over to the new systemthe only thing to note is that before a terminal configuration can be reached every word has to be pushed onto the stack in a shiftp transition which ensures that every nodeword in the output tree will be taggedwhile early transitionbased parsers generally used greedy bestfirst inference and locally trained classifiers recent work has shown that higher accuracy can be obtained using beam search and global structure learning to mitigate error propagationin particular it seems that the globally learned models can exploit a much richer feature space than locally trained classifiers as shown by zhang and nivre since joint tagging and parsing increases the size of the search space and is likely to require novel features we use beam search in combination with structured perceptron learningthe beam search algorithm used to derive the best parse y for a 
sentence x is outlined in figure 2in addition to the sentence x it takes as input a weight vector w corresponding to a linear model for scoring transitions out of configurations and two prunw and beam parameters b1 and b2the symbols hc hs and hf denote respectively the configuration score and feature representation of a hypothesis h hca denotes the arc set of hc ing parameters b1 and b2a parse hypothesis h is represented by a configuration hc a score hs and a feature vector hf for the transition sequence up to hchypotheses are stored in the list beam which is sorted by descending scores and initialized to hold the hypothesis h0 corresponding to the initial configuration cs with score 00 and all features set to 00 in the main loop a set of new hypotheses is derived and stored in the list tmp which is finally pruned and assigned as the new value of beamthe main loop terminates when all hypotheses in beam contain terminal configurations and the dependency tree extracted from the top scoring hypothesis is returned the set of new hypotheses is created in two nested loops where every hypothesis h in beam is updated using every permissible transition t for the configuration hcthe feature representation of the new hypothesis is obtained by adding the feature vector f for the current configurationtransition pair to the feature vector of the old hypothesis similarly the score of the new hypothesis is the sum of the score f w of the current configurationtransition pair and the score of the old hypothesis the feature representationscore of a complete parse y for x with transition sequence c0m is thus the sum of the feature representationsscores of the configurationtransition pairs in c0m finally the configuration of the new hypothesis is obtained by evaluating t the new hypothesis is then inserted into tmp in scoresorted order the pruning parameters b1 and b2 determine the number of hypotheses allowed in the beam and at the same time control the tradeoff between syntactic and morphological ambiguityfirst we extract the b1 highest scoring hypotheses with distinct dependency treesthen we extract the b2 highest scoring remaining hypotheses which will typically be tagging variants of dependency trees that are already in the beamin this way we prevent the beam from getting filled up with too many tagging variants of the same dependency tree which was found to be harmful in preliminary experimentsone final thing to note about the inference algorithm is that the notion of permissibility for a transition t out of a configuration c can be used to capture not only formal constraints on transitions such as the fact that it is impossible to perform a shiftp transition with an empty buffer or illegal to perform a leftarcd transition with the special root node on top of the stack but also to filter out unlikely dependency labels or tagsthus in the experiments later on we will typically constrain the parser so that shiftp is permissible only if p is one of the k best partofspeech tags with a score no more than α below the score of the 1best tag as determined by a preprocessing taggerwe also filter out instances of leftarcd and rightarcd where d does not occur in the training data for the predicted partofspeech tag combination of the head and dependentthis procedure leads to a significant speed upin order to learn a weight vector w from a training set 1 j1 of sentences with their tagged dependency trees we use a variant of the structured perceptron introduced by collins which makes n iterations over the training data 
and updates the weight vector for every sentence xj where the highest scoring parse y is different from yjmore precisely we use the passiveaggressive update of crammer et al where we also use the early update strategy found beneficial for parsing in several previous studies which means that during learning we terminate the beam search as soon as the hypothesis corresponding to the gold parse yj falls out of the beam and update with respect to the partial transition sequence constructed up to that pointfinally we use the standard technique of averaging over all weight vectors as originally proposed by collins as already noted the feature representation f of an input sentence x with parse y decomposes into feature representations f for the transitions t needed to derive y from csfeatures may refer to any aspect of a configuration as encoded in the stack e the buffer b the arc set a and the labelings 7r and s in addition we assume that each word w in the input is assigned up to k candidate partofspeech tags 7ri with corresponding scores s use ei and bi to denote the ith token in the stack e and buffer b respectively with indexing starting at 0 and we use the following functors to extract properties of a token πi ith best tag s score of ith best tag π finally predicted tag w word form pi word prefix of i characters si word suffix of i charactersscore differences are binned in discrete steps of 005the bulk of features used in our system are taken from zhang and nivre although with two important differencesfirst of all like hatori et al we have omitted all features that presuppose an arceager parsing order since our transition system defines an arcstandard ordersecondly any feature that refers to the partofspeech tag of a word w in the buffer b will in our system refer to the topscoring tag π1 rather than the finally predicted tagby contrast for a word in the stack e partofspeech features refer to the tag π chosen when shifting w onto the stack in addition to the standard features for transitionbased dependency parsing we have added features specifically to improve the tagging step in the joint modelthe templates for these features which are specified in figure 3 all involve the ith best tag assigned to the first word of the buffer b in combination with neighboring words word prefixes word suffixes score differences and tag rankfinally in some experiments we make use of two additional feature sets which we call graph features and cluster features respectivelygraph features are defined over the factors of a graphbased dependency parser which was shown to improve the accuracy of a transitionbased parser by zhang and clark however while their features were limited to certain first and secondorder factors we use features over second and thirdorder factors as found in the parsers of bohnet and kuhn these features are scored as soon as the factors are completed using a technique that is similar to what hatori et al call delayed features although they use it for partofspeech tags in the lookahead while we use it for subgraphs of the dependency treecluster features finally are features over word clusters as first used by koo et al which replace partofspeech tag features2 we use a hash kernel to map features to weightsit has been observed that most of the computing time in featurerich parsers is spent retrieving the index of each feature in the weight vector this is usually done via a hash table but significant speedups can be achieved by using a hash kernel which simply replaces table lookup by a hash 
function the price to pay for these speedups is that there may be collisions so that different features are mapped to the same index but this is often compensated by the fact that the lower time and memory requirements of the hash kernel enables the use of negative features that is features that are never seen in the training set but occur in erroneous hypotheses at training time and can therefore be helpful also at inference timeas a result the hash kernel often improves accuracy as well as efficiency compared to traditional techniques that only make use of features that occur in gold standard parses we have evaluated the model for joint tagging and dependency parsing on four typologically diverse languages chinese czech english and germanmost of the experiments use the conll 2009 data sets with the training development and test split used in the shared task but for better comparison with previous work we also report results for the standard benchmark data sets for chinese and englishfor chinese this is the penn chinese treebank 51 converted with the headfinding rules and conversion tools of zhang and clark and with the same split as in zhang and clark and li et al 3 for english this is the wsj section of the penn treebank converted with the headfinding rules of yamada and matsumoto and the labeling rules of nivre 4 in order to assign kbest partofspeech tags and scores to words in the training set we used a perceptron tagger with 10fold jackknifingthe same type of tagger was trained on the entire training set in order to supply tags for the development and test setsthe feature set of the tagger was optimized for english and german and provides stateoftheart accuracy for these two languagesthe 1best tagging accuracy for section 23 of the penn treebank is 9728 which is on a par with toutanova et al for german we obtain a tagging accuracy of 9724 which is close to the 9739 achieved by the rftagger which to our knowledge is the best tagger for german5 the results are not directly comparable to the rftagger as it was evaluated on a different part of the tiger treebank and trained on a larger part of the treebankwe could not use the larger training set as it contains the test set of the conll 2009 data that we use to evaluate the joint modelfor czech the 1best tagging accuracy is 9911 and for chinese 9265 on the conll 2009 test setwe trained parsers with 25 iterations and report results for the model obtained after the last iterationfor cluster features available only for english and german we used standard brown clusters based on the english and german gigaword corpuswe restricted the vocabulary to words that occur at least 10 times used 800 clusters and took cluster prefixes of length 6 to define featureswe report the following evaluation metrics partofspeech accuracy unlabeled attachment score labeled attachment score and tagged labeled attachment score tlas is a new metric defined as the percentage of words that are assigned the correct partofspeech tag the correct head and the correct dependency labelin line with previous work punctuation is included in the evaluation for the conll data sets but excluded for the two benchmark data setstable 1 presents results on the development sets of the conll 2009 shared task with varying values of the two tag parameters k and α and beam parameters fixed at b1 40 and b2 4we use the combined tlas score on the development set to select the optimal settings for each languagefor chinese we obtain the best result with 3 tags and a threshold of 016 compared 
to the baseline we observe a pos improvement of 060 and a las improvement of 051for czech we get the best tlas with k 3 and α 02 where pos improves by 006 and las by 046for english the best setting is k 2 and α 01 with a pos improvement of 017 and a las improvement of 062for german finally we see the greatest improvement with k 3 the updated scores later reported due to some improvements of the parserrows 34 baseline and best settings for k and α on development setrows 56 wider beam and added graph features and cluster features second beam parameter b2 fixed at 4 in all cases and α 03 where pos improves by 066 and las by 086table 2 shows the results on the conll 2009 test setsfor all languages except english we obtain stateoftheart results already with bi 40 and for all languages both tagging and parsing accuracy improve compared to the baseline the improvement in tlas is statistically significant with p 001 for all languages row 5 shows the scores with a beam of 80 and the additional graph featureshere the las scores for chinese czech and german are higher than the best results on the conll 2009 data sets and the score for english is highly competitivefor chinese we achieve 7851 las which is 15 percentage points higher than the reference score while the pos score is 054 higher than our baselinefor czech we get 8373 las which is by far the highest score reported for this data set together with stateoftheart pos accuracyfor german we obtain 8905 las and 9778 pos which in both cases is substantially better than in the conll shared taskwe believe it is also the highest pos accuracy ever reported for a taggerparser trained only on the tiger treebankrow 6 finally presents results with added cluster features for english and german which results in additional improvements in all metricstable 3 gives the results for the penn treebank converted with the headfinding rules of yamada and matsumoto and the labeling rules of nivre we use k 3 and α 04 which gave the best results on the development setthe uas improves by 024 when we do joint tagging and parsingthe pos accuracy improves slightly by 012 but to a lower degree than for the english conll data where we observed an improvement of 020nonetheless the improvement in the joint tlas score is statistically significant at p 001 our joint tagger and dependency parser with graph features gives very competitive unlabeled dependency scores for english with 9338 uasto the best of our knowledge this is the highest score reported for a dependency parser that does not use additional information sourcesby adding cluster features and widening the beam to bi 80 we achieve 9367 uaswe also obtain a pos accuracy of 9742 which is on a par with the best results obtained using semisupervised taggers table 4 shows the results for the chinese penn treebank ctb 51 together with related workin experiments with the development set we could confirm the results from the chinese conll data set and obtained the best results with the same settings with bi 40 uas improves by 025 and pos by 030 and the tlas improvement is again highly significant we get the highest uas 8142 with a beam of 80 and added graph features in which case pos accuracy increases from 9281 to 9324since our tagger was not optimized for chinese we have lower baseline results for the tagger than both li et al and hatori et al but still manage to achieve the highest reported uasthe speed of the joint tagger and dependency parser is quite reasonable with about 04 seconds per sentence on the wsjptb test set given 
that we perform tagging and labeled parsing with a beam of 80 while incorporating the features of a thirdorder graphbased modelexperiments were performed on a computer with an intel i73960x cpu these performance values are preliminary since we are still working on the speedup of the parserin order to better understand the benefits of the joint model we performed an error analysis for german parts of speech in german with fscores for the lefthandside categoryadj adjective adv adverb art determiner appr preposition ne proper noun nn common noun prels relative pronoun vvfin finite verb vvinf nonfinite verb vafin finite auxiliary verb vainf nonfinite auxiliary verb vvpp participle xy not a wordwe use α to denote the set of categories with α as a prefix and english where we compared the baseline and the joint model with respect to fscores for individual partofspeech categories and dependency labelsfor the partofspeech categories we found an improvement across the board for both languages with no category having a significant decrease in fscore but we also found some interesting patterns for categories that improved more than the averagetable 5 shows selected entries from the confusion matrix for german where we see substantial improvements for finite and nonfinite verbs which are often morphologically ambiguous but which can be disambiguated using syntactic contextwe also see improved accuracies for common and proper nouns which are both capitalized in standard german orthography and therefore often mistagged and for relative pronouns which are less often confused for determiners in the joint modeltable 6 gives a similar snapshot for english and we again see improvements for verb categories that are often morphologically ambiguous such as past participles which can be confused for past tense verbs and present tense verbs in third person singular which can be confused for nounswe also see some improvement for the singular noun categoparts of speech in english with fscores for the lefthandside categorydt determiner in preposition or subordinating conjunction jj adjective jjr comparative adjective nn singular or mass noun nns plural noun pos possessive clitic rb adverb rbr comparative adverb rp particle uh interjection vb base form verb vbd past tense verb vbg gerund or present participle vbn past participle vbp present tense verb not 3rd person singular vbz present tense verb 3rd person singularwe use α to denote the set of categories with α as a prefix ry and for adverbs which are less often confused for prepositions or subordinating conjunctions thanks to the syntactic information in the joint modelfor dependency labels it is hard to extract any striking patterns and it seems that we mainly see an improvement in overall parsing accuracy thanks to less severe tagging errorshowever it is worth observing that for both english and german we see significant fscore improvements for the core grammatical functions subject and object our work is most closely related to lee et al li et al and hatori et al who all present discriminative models for joint tagging and dependency parsinghowever all three models only perform unlabeled parsing while our model incorporates dependency labels into the parsing processwhereas lee et al and li et al take a graphbased approach to dependency parsing hatori et al use a transitionbased model similar to ours but limited to projective dependency treesboth li et al and hatori et al only evaluate their model on chinese and of these only hatori et al report consistent 
improvements in both tagging and parsing accuracylike our system the parser of lee et al can handle nonprojective trees and experimental results are presented for four languages but their graphbased model is relatively simple and the baselines therefore well below the state of the artwe are thus the first to show consistent improvements in both tagging and parsing accuracy across typologically diverse languages at the stateoftheart levelmoreover the capacity to handle nonprojective dependencies which is crucial to attain good performance on czech and german does not seem to hurt performance on english and chinese where the benchmark sets contain only projective treesthe use of beam search in transitionbased dependency parsing in order to mitigate the problem of error propagation was first proposed by johansson and nugues although they still used a locally trained modelglobally normalized models were first explored by titov and henderson who were also the first to use a parameterized shift transition like the one found in both hatori et al and our own work although titov and henderson used it to define a generative model by parameterizing the shift transition by an input wordzhang and clark was the first to combine beam search with a globally normalized discriminative model using structured perceptron learning and the early update strategy of collins and roark and also explored the addition of graphbased features to a transitionbased parserthis approach was further pursued in zhang and clark and was used by zhang and nivre to achieve stateoftheart results in dependency parsing for both chinese and english through the addition of rich nonlocal featureshuang and sagae combined structured perceptron learning and beam search with the use of a graphstructured stack to allow ambiguity packing in the beam a technique that was reused by hatori et al finally as noted in the introduction although joint tagging and parsing is rare in dependency parsing most stateoftheart parsers based on pcfg models naturally incorporate partofspeech tagging and usually achieve better parsing accuracy with a joint model than with a pipeline approach models that in addition incorporate morphological analysis and segmentation have been explored by tsarfaty cohen and smith and goldberg and tsarfaty with special reference to hebrew parsingwe have presented the first system for joint partofspeech tagging and labeled dependency parsing with nonprojective dependency treesevaluation on four languages shows consistent improvements in both tagging and parsing accuracy over a pipeline system with stateoftheart results across the boardthe error analysis reveals improvements in tagging accuracy for syntactically central categories mainly verbs with improvement in syntactic accuracy for core grammatical functions as a resultin future work we intend to explore joint models that incorporate not only basic partofspeech tags but also more finegrained morphological features
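The joint transition system described in the paper above extends arc-standard parsing with a tag-parameterised shift and a swap transition for non-projectivity. The sketch below is a minimal illustration of those four transitions over a list-based configuration; the class and function names are assumptions for exposition, not the authors' implementation.

    class Config:
        # A configuration (stack, buffer, arcs, pi, delta): the stack and
        # buffer are disjoint lists of node indices, arcs is a set of
        # (head, dependent) pairs, pi maps nodes to part-of-speech tags and
        # delta maps arcs to dependency labels.
        def __init__(self, n):
            self.stack = [0]                      # artificial root node 0
            self.buffer = list(range(1, n + 1))   # one node per word
            self.arcs = set()
            self.pi = {}                          # node -> tag
            self.delta = {}                       # (head, dep) -> label

    def shift(c, p):
        # SHIFT_p: move the first buffer node onto the stack and tag it with
        # p, so every word is tagged at the moment it is shifted.
        node = c.buffer.pop(0)
        c.pi[node] = p
        c.stack.append(node)

    def left_arc(c, d):
        # LEFT-ARC_d: the top of the stack becomes the head of the node below
        # it; only the head remains on the stack.
        head, dep = c.stack[-1], c.stack[-2]
        del c.stack[-2]
        c.arcs.add((head, dep)); c.delta[(head, dep)] = d

    def right_arc(c, d):
        # RIGHT-ARC_d: the second-topmost node becomes the head of the top
        # node; again only the head remains on the stack.
        head, dep = c.stack[-2], c.stack[-1]
        c.stack.pop()
        c.arcs.add((head, dep)); c.delta[(head, dep)] = d

    def swap(c):
        # SWAP: move the second-topmost stack node back to the front of the
        # buffer; permissible only while 0 < stack[-2] < stack[-1], i.e. the
        # two top nodes are still in sentence order.  This is what allows
        # arbitrary non-projective trees to be derived.
        node = c.stack[-2]
        del c.stack[-2]
        c.buffer.insert(0, node)

    def is_terminal(c):
        # Assumed terminal condition: empty buffer, only the root remaining.
        return not c.buffer and c.stack == [0]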
D12-1133
a transitionbased system for joint partofspeech tagging and labeled nonprojective dependency parsingmost current dependency parsers presuppose that input words have been morphologically disambiguated using a partofspeech tagger before parsing beginswe present a transitionbased system for joint partofspeech tagging and labeled dependency parsing with nonprojective treesexperimental evaluation on chinese czech english and german shows consistent improvements in both tagging and parsing accuracy when compared to a pipeline system which lead to improved stateoftheart results for all languageswe introduce a transitionbased system that jointly performed pos tagging and dependency parsing
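Two details of the beam-search inference in the preceding paper lend themselves to a compact sketch: the two-part pruning that keeps b1 hypotheses with distinct dependency trees plus b2 further tagging variants, and the permissibility filter that allows SHIFT_p only for the k best tags whose score is within alpha of the 1-best tag. The code below is illustrative only; the hypothesis objects are assumed to expose a score, an arc set and dependency labels.

    def prune(hypotheses, b1, b2):
        # Sort all candidate hypotheses by descending model score.
        hypotheses = sorted(hypotheses, key=lambda h: h.score, reverse=True)
        beam, seen, rest = [], set(), []
        for h in hypotheses:
            # Identify the labeled dependency tree built so far.
            tree = frozenset((hd, dp, h.delta[(hd, dp)]) for hd, dp in h.arcs)
            if tree not in seen and len(seen) < b1:
                beam.append(h)          # b1 best hypotheses with distinct trees
                seen.add(tree)
            else:
                rest.append(h)          # typically tagging variants
        beam.extend(rest[:b2])          # b2 best remaining hypotheses
        return sorted(beam, key=lambda h: h.score, reverse=True)

    def permissible_tags(kbest, k, alpha):
        # kbest: list of (tag, score) pairs sorted by descending score, as
        # produced by the preprocessing tagger.  SHIFT_p is permissible only
        # for the k best tags within alpha of the 1-best tag's score.
        best_score = kbest[0][1]
        return [tag for tag, score in kbest[:k] if best_score - score <= alpha]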
an efficient implementation of a new dop model two apparently opposing dop models exist in the literature one which computes the parse tree involving the most frequent subtrees from a treebank and one which computes the parse tree involving the fewest subtrees from a treebank this paper proposes an integration of the two models which outperforms each of them separately together with a pcfgreduction of dop we obtain improved accuracy and efficiency on the wall street journal treebank our results show an 11 relative reduction in error rate over previous models and an average processing time of 36 seconds per wsj sentence the distinctive feature of the dop approach when it was proposed in 1992 was to model sentence structures on the basis of previously observed frequencies of sentence structure fragments without imposing any constraints on the size of these fragmentsfragments include for instance subtrees of depth 1 as well as entire treesto appreciate these innovations it should be noted that the model was radically different from all other statistical parsing models at the timeother models started off with a predefined grammar and used a corpus only for estimating the rule probabilities the dop model on the other hand was the first model that proposed not to train a predefined grammar on a corpus but to directly use corpus fragments as a grammarthis approach has now gained wide usage as exemplified by the work of collins charniak johnson chiang and many othersthe other innovation of dop was to take all corpus fragments of any size rather than a small subsetthis innovation has not become generally adopted yet many approaches still work either with local trees ie single level rules with limited means of information percolation or with restricted fragments as in stochastic treeadjoining grammar that do not include nonlexicalized fragmentshowever during the last few years we can observe a shift towards using more and larger corpus fragments with fewer restrictionswhile the models of collins and eisner restricted the fragments to the locality of headwords later models showed the importance of including context from higher nodes in the tree the importance of including nonheadwords has become uncontroversial and collins argues for quotkeeping track of counts of arbitrary fragments within parse treesquot which has indeed been carried out in collins and duffy who use exactly the same set of tree fragments as proposed in bod thus the major innovations of dop are 2 the use of arbitrarily large fragments rather than restricted ones both have gained or are gaining wide usage and are also becoming relevant for theoretical linguistics one instantiation of dop which has received considerable interest is the model known as dop12 dop1 combines subtrees from a treebank by means of nodesubstitution and computes the probability of a tree from the normalized frequencies of the subtrees bod showed how standard parsing techniques can be applied to dop1 by converting subtrees into ruleshowever the problem of computing the most probable parse turns out to be nphard mainly because the same parse tree can be generated by exponentially many derivationsmany implementations of dop1 therefore estimate the most probable parse by monte carlo techniques or by viterbi nbest search or by restricting the set of subtrees simaan gave an efficient algorithm for computing the parse tree generated by the most probable derivation which in some cases is a reasonable approximation of the most probable parsegoodman developed a 
polynomial time pcfgreduction of dop1 whose size is linear in the size of the training set thus converting the exponential number of subtrees to a compact grammarwhile goodman method does still not allow for an efficient computation of the most probable parse in dop1 it does efficiently compute the "maximum constituents parse" ie the parse tree which is most likely to have the largest number of correct constituentsjohnson showed that dop1 subtree estimation method is statistically biased and inconsistentbod solved this problem by training the subtree probabilities by a maximum likelihood procedure based on expectationmaximizationthis resulted in a statistically consistent model dubbed mldophowever mldop suffers from overlearning if the subtrees are trained on the same treebank trees as they are derived fromcrossvalidation is needed to avoid this problembut even with crossvalidation mldop is outperformed by the much simpler dop1 model on both the atis and ovis treebanks bonnema et al observed that another problem with dop1 subtreeestimation method is that it provides more probability to nodes with more subtrees and therefore more probability to larger subtreesas an alternative bonnema et al propose a subtree estimator which reduces the probability of a tree by a factor of two for each nonroot nonterminal it containsbod used an alternative technique which samples a fixed number of subtrees of each depth and which has the effect of assigning roughly equal weight to each node in the training dataalthough bod method obtains very competitive results on the wall street journal task the parsing time was reported to be over 200 seconds per sentence collins and duffy showed how the perceptron algorithm can be used to efficiently compute the best parse with dop1 subtrees reporting a 51 relative reduction in error rate over the model in collins on the wsjgoodman furthermore showed how bonnema et al and bod estimators can be incorporated in his pcfgreduction but did not report any experiments with these reductionsthis paper presents the first published results with goodman pcfgreductions of both bonnema et al and bod estimators on the wsjwe show that these pcfgreductions result in a 60 times speedup in processing time wrt bod but while bod estimator obtains stateoftheart results on the wsj comparable to charniak and collins bonnema et al estimator performs worse and is comparable to collins in the second part of this paper we extend our experiments with a new notion of the best parse treemost previous notions of best parse tree in dop1 were based on a probabilistic metric with bod as a notable exception who used a simplicity metric based on the shortest derivationwe show that a combination of a probabilistic and a simplicity metric which chooses the simplest parse from the n likeliest parses outperforms the use of these metrics alonecompared to bod our results show an 11 improvement in terms of relative error reduction and a speedup which reduces the processing time from 220 to 36 seconds per wsj sentencedop1 parses new input by combining treebanksubtrees by means of a leftmost nodesubstitution operation indicated as ∘the probability of a parse tree is computed from the occurrencefrequencies of the subtrees in the treebankthat is the probability of a subtree t is taken as the number of occurrences of t in the training set |t| divided by the total number of occurrences of all subtrees t′ with the same root label as t let r(t) return the root label of t so that P(t) = |t| / Σt′:r(t′)=r(t) |t′| the probability of a derivation t1 ∘ ... ∘ tn is computed by
the product of the probabilities of its subtrees P(t1 ∘ ... ∘ tn) = Πi P(ti) an important feature of dop1 is that there may be several derivations that generate the same parse treethe probability of a parse tree t is the sum of the probabilities of its distinct derivationslet tid be the ith subtree in the derivation d that produces tree t then the probability of t is given by P(t) = Σd Πi P(tid) thus dop1 considers counts of subtrees of a wide range of sizes in computing the probability of a tree everything from counts of singlelevel rules to counts of entire treesa disadvantage of this model is that an extremely large number of subtrees must be taken into accountfortunately there exists a compact pcfgreduction of dop1 that generates the same trees with the same probabilities as shown by goodman here we will only sketch this pcfgreduction which is heavily based on goodman goodman assigns every node in every tree a unique number which is called its addressthe notation ak denotes the node at address k where a is the nonterminal labeling that nodea new nonterminal is created for each node in the training datathis nonterminal is called ak nonterminals of this form are called "interior" nonterminals while the original nonterminals in the parse trees are called "exterior" nonterminalslet aj represent the number of subtrees headed by the node ajlet a represent the number of subtrees headed by nodes with nonterminal a that is a = Σj ajgoodman further illustrates this by a node aj with left daughter bk and right daughter cl to see how many subtrees it has goodman first considers the possibilities of the left branchthere are bk nontrivial subtrees headed by bk and there is also the trivial case where the left node is simply bthus there are bk + 1 different possibilities on the left branchsimilarly there are cl + 1 possibilities on the right branchwe can create a subtree by choosing any possible left subtree and any possible right subtreethus there are aj = (bk + 1)(cl + 1) possible subtrees headed by aj goodman then gives a simple small pcfg with the following property for every subtree in the training corpus headed by a the grammar will generate an isomorphic subderivation with probability 1/a thus rather than using the large explicit dop1 model one can also use this small pcfg that generates isomorphic derivations with identical probabilitiesgoodman construction is as followsfor the node in figure 1 the following eight pcfg rules are generated where the number in parentheses following a rule is its probabilitygoodman then shows by simple induction that subderivations headed by a with external nonterminals at the root and leaves internal nonterminals elsewhere have probability 1/a and subderivations headed by aj with external nonterminals only at the leaves internal nonterminals elsewhere have probability 1/aj goodman main theorem is that this construction produces pcfg derivations isomorphic to dop derivations with equal probabilitythis means that summing up over derivations of a tree in dop yields the same probability as summing over all the isomorphic derivations in the pcfgnote that goodman reduction method does still not allow for an efficient computation of the most probable parse tree of a sentence there may still be exponentially many derivations generating the same treebut goodman shows that with his pcfgreduction he can efficiently compute the aforementioned maximum constituents parsemoreover goodman pcfg reduction may also be used to estimate the most probable parse by viterbi nbest search which computes the n most likely derivations and then sums up the
probabilities of the derivations producing the same treewhile bod needed to use a very large sample from the wsj subtrees to do this goodman method can do the same job with a more compact grammardop1 has a serious bias its subtree estimator provides more probability to nodes with more subtrees the amount of probability given to two different training nodes depends on how many subtrees they have and given that the number of subtrees is an exponential function this means that some training nodes could easily get hundreds or thousands of times the weight of others even if both occur exactly oncebonnema et al show that as a consequence too much weight is given to larger subtrees and that the parse accuracy of dop1 deteriorates if large subtrees are includedalthough this property may not be very harmful for small corpora with relatively small trees such as the atis bonnema et al give evidence that it leads to severe biases for larger corpora such as the wsjthere are several ways to fix this problemfor example bod samples a fixed number of subtrees of each depth which has the effect of assigning roughly equal weight to each node in the training data and roughly exponentially less probability for larger trees bod reports stateoftheart results with this method and observes no decrease in parse accuracy when larger subtrees are included yet his grammar contains more than 5 million subtrees and processing times of over 200 seconds per wsj sentence are reported in this paper we will test a simple extension of goodman compact pcfgreduction of dop which has the same property as the normalization proposed in bod in that it assigns roughly equal weight to each node in the training datalet a be the number of times nonterminals of type a occur in the training datathen we slightly modify the pcfgreduction in figure 2 as follows we will also test the proposal by bonnema et al which reduces the probability of a subtree by a factor of two for each nonroot nonterminal it containsit easy to see that this is equivalent to reducing the probability of a tree by a factor of four for each pair of nonterminals it contains resulting in the pcfg reduction in figure 4tested on the ovis corpus bonnema et al proposal obtains results that are comparable to simaan see bonnema et althis paper presents the first published results with this estimator on the wsjby using these pcfgreductions we can thus parse with all subtrees in polynomial timehowever as mentioned above efficient parsing does not necessarily mean efficient disambiguation the exact computation of the most probable parse remains exponentialin this paper we will estimate the most probable parse by computing the 10000 most probable derivations by means of viterbi nbest from which the most likely parse is estimated by summing up the probabilities of the derivations that generate the same parsemost dop models such as in bod goodman bonnema et al simaan and collins duffy use a likelihood criterion in defining the best parse tree they take the most likely tree as a candidate for the best tree of a sentencewe will refer to these models as likelihooddop models but in this paper we will specifically mean by quotlikelihooddopquot the pcfgreduction of bod given in section 22in bod an alternative notion for the best parse tree was proposed based on a simplicity criterion instead of producing the most probable tree this model produced the tree generated by the shortest derivation with the fewest training subtreeswe will refer to this model as simplicitydopin case the shortest 
derivation is not unique bod proposes to back off to a frequency ordering of the subtreesthat is all subtrees of each root label are assigned a rank according to their frequency in the treebank the most frequent subtree of each root label gets rank 1 the second most frequent subtree gets rank 2 etcnext the rank of each derivation is computed as the sum of the ranks of the subtrees involvedthe derivation with the smallest sum or highest rank is taken as the final best derivation producing the best parse tree in simplicitydop3 although bod reports that simplicity dop is outperformed by likelihooddop its results are still rather impressive for such a simple modelwhat is more important is that the best parse trees predicted by simplicitydop are quite different from the best parse trees predicted by likelihooddopthis suggests that a model which combines these two notions of best parse may boost the accuracythe underlying idea of combining likelihooddop and simplicitydop is that the parser selects the simplest tree from among the n most probable trees where n is a free parametera straightforward alternative would be to select the most probable tree from among the n simplest treeswe will refer to the first combination as simplicitylikelihooddop or sldop and to the second combination as likelihoodsimplicitydop or lsdopnote that for n1 sldop is equal to likelihooddop since there is only one most probable tree to select from and lsdop is equal to simplicitydop since there is only one simplest tree to select frommoreover if n gets large sldop converges to simplicitydop while lsdop converges to likelihooddopby varying the parameter n we will be able to compare likelihooddop simplicitydop and several instantiations of sldop and lsdopnote that goodman pcfgreduction method summarized in section 2 applies not only to likelihooddop but also to simplicitydopthe only thing that needs to be changed for simplicitydop is that all subtrees should be assigned equal probabilitiesthen the shortest derivation is equal to the most probable derivation and can be computed by standard viterbi optimization which can be seen as follows if each subtree has a probability p then the probability of a derivation involving n subtrees is equal to pn and since 0p1 the derivation with the fewest subtrees has the greatest probabilityfor sldop and lsdop we first compute either n likeliest or n simplest trees by means of viterbi optimizationnext we either select the simplest tree among the n likeliest ones or the likeliest tree among the n simplest ones in our experiments n will never be larger than 1000for our experiments we used the standard division of the wsj with sections 2 through 21 for training and section 23 for testing section 22 was used as development setas usual all trees were stripped off their semantic tags coreference information and quotation markswithout loss of generality all trees were converted to binary branching we employed the same unknown word model as in bod based on statistics on wordendings hyphenation and capitalization in combination with goodturing we used quotevalbquot4 to compute the standard parseval scores for our results we focused on the labeled precision and labeled recall scores as these are commonly used to rank parsing systemsour first experimental goal was to compare the two pcfgreductions in section 22 which we will refer to resp as bod01 and bon99table 1 gives the results of these experiments and compares them with some other statistical parsers while the pcfg reduction of bod obtains 
stateoftheart results on the wsj comparable to charniak bonnema et al estimator performs worse and is comparable to collins as to the processing time the pcfg reduction parses each sentence 100 words in 36 seconds average while the parser in bod which uses over 5 million subtrees is reported to take about 220 seconds per sentencethis corresponds to a speedup of over 60 timesit should be mentioned that the best precision and recall scores reported in bod are slightly better than the ones reported here this may be explained by the fact our best results in bod were obtained by testing various subtree restrictions until the highest accuracy was obtained while in the current experiment we used all subtrees as given by the pcfgreductionin the following section first results of sldop and lsdop with a compact pcfgreduction we will see that our new definition of best parse tree also outperforms the best results obtained in bod as our second experimental goal we compared the models sldop and lsdop explained in section 32recall that for n1 sldop is equal to the pcfgreduction of bod while lsdop is equal to simplicitydoptable 2 shows the results for sentences 100 words for various values of n note that there is an increase in accuracy for both sldop and lsdop if the value of n increases from 1 to 12but while the accuracy of sldop decreases after n14 and converges to simplicity dop the accuracy of lsdop continues to increase and converges to likelihooddopthe highest accuracy is obtained by sldop at 12 n 14 an lp of 908 and an lr of 907this is roughly an 11 relative reduction in error rate over charniak and bods pcfgreduction reported in table 1compared to the reranking technique in collins who obtained an lp of 899 and an lr of 896 our results show a 9 relative error rate reductionwhile sldop and lsdop have been compared before in bod especially in the context of musical parsing this paper presents the the dop approach is based on two distinctive features the use of corpus fragments rather than grammar rules and the use of arbitrarily large fragments rather than restricted oneswhile the first feature has been generally adopted in statistical nlp the second feature has for a long time been a serious bottleneck as it results in exponential processing time when the most probable parse tree is computedthis paper showed that a pcfgreduction of dop in combination with a new notion of the best parse tree results in fast processing times and very competitive accuracy on the wall street journal treebankthis paper also reaffirmed that the coarsegrained approach of using all subtrees from a treebank outperforms the finegrained approach of specifically modeling lexicalsyntactic depen dencies
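The eight-rule construction referred to in the text above (the explicit rule listing has been lost in this copy) is Goodman's standard PCFG reduction: for a treebank node A_j with left daughter B_k and right daughter C_l, four rules are headed by the interior nonterminal A_j with probabilities summing to one over a_j, and four by the exterior nonterminal A with the same numerators over a. The sketch below reproduces that construction; the string encoding of interior nonterminals is an illustrative choice.

    def subtree_count(b_k, c_l):
        # Number of subtrees headed by a binary node whose daughters head
        # b_k and c_l subtrees respectively: a_j = (b_k + 1) * (c_l + 1).
        return (b_k + 1) * (c_l + 1)

    def goodman_rules(A, j, B, k, C, l, b_k, c_l, a_total):
        # Interior nonterminals are written 'A@j'; exterior ones keep their
        # original label.  a_total is a = sum_j a_j, the number of subtrees
        # headed by any node labelled A in the training data.
        a_j = subtree_count(b_k, c_l)
        Aj, Bk, Cl = f"{A}@{j}", f"{B}@{k}", f"{C}@{l}"
        return [
            (Aj, (B,  C ), 1.0 / a_j),
            (Aj, (Bk, C ), b_k / a_j),
            (Aj, (B,  Cl), c_l / a_j),
            (Aj, (Bk, Cl), b_k * c_l / a_j),
            (A,  (B,  C ), 1.0 / a_total),
            (A,  (Bk, C ), b_k / a_total),
            (A,  (B,  Cl), c_l / a_total),
            (A,  (Bk, Cl), b_k * c_l / a_total),
        ]

    # The four A_j rules sum to (1 + b_k)(1 + c_l) / a_j = 1, and summing the
    # A rules over all nodes labelled A gives sum_j a_j / a = 1, so the
    # reduction is a proper PCFG generating subderivations headed by A with
    # probability 1/a and headed by A_j with probability 1/a_j, as stated above.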
E03-1005
an efficient implementation of a new dop modeltwo apparently opposing dop models exist in the literature one which computes the parse tree involving the most frequent subtrees from a treebank and one which computes the parse tree involving the fewest subtrees from a treebankthis paper proposes an integration of the two models which outperforms each of them separatelytogether with a pcfgreduction of dop we obtain improved accuracy and efficiency on the wall street journal treebankour results show an 11 relative reduction in error rate over previous models and an average processing time of 36 seconds per wsj sentencewe note that it is the highest ranking parse not derivation that is desiredwe show that dop models that select the preferred parse of a test sentence using the shortest derivation criterion perform very wellwe redress subtree probabilit by a simple correction factor
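The SL-DOP and LS-DOP selection rules described in the preceding paper reduce to a few lines once an n-best list is available. In the sketch below each candidate is assumed to be a (tree, probability, shortest-derivation-length) triple; the interface is hypothetical.

    def sl_dop(nbest_by_probability, n):
        # SL-DOP: among the n most probable parse trees, return the simplest
        # one, i.e. the tree whose shortest derivation uses the fewest subtrees.
        candidates = nbest_by_probability[:n]
        return min(candidates, key=lambda c: c[2])

    def ls_dop(nbest_by_simplicity, n):
        # LS-DOP: among the n simplest trees, return the most probable one.
        candidates = nbest_by_simplicity[:n]
        return max(candidates, key=lambda c: c[1])

    # For n = 1, SL-DOP coincides with Likelihood-DOP and LS-DOP with
    # Simplicity-DOP; as n grows, each converges to the opposite model.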
bootstrapping statistical parsers from small datasets we present a practical cotraining method for bootstrapping statistical parsers using a small amount of manually parsed training material and a much larger pool of raw sentences experimental results show that unlabelled sentences can be used to improve the performance of statistical parsers in addition we consider the problem of bootstrapping parsers when the manually parsed training material is in a different domain to either the raw sentences or the testing material we show that bootstrapping continues to be useful even though no manually produced parses from the target domain are used in this paper we describe how cotraining can be used to bootstrap a pair of statistical parsers from a small amount of annotated training datacotraining is a wealdy supervised learning algorithm in which two learners are iteratively retrained on each other outputit has been applied to problems such as wordsense disambiguation webpage classification and namedentity recognition however these tasks typically involved a small set of labels and a relatively small parameter spaceit is therefore instructive to consider cotraining for more complex modelscompared to these earlier models a statistical parser has a larger parameter space and instead of class labels it produces recursively built parse trees as outputprevious work in cotraining statistical parsers used two components of a single parsing framework in contrast this paper considers cotraining two diverse statistical parsers the collins lexicalized pcfg parser and a lexicalized tree adjoining grammar parsersection 2 reviews cotraining theorysection 3 considers how cotraining applied to training statistical parsers can be made computationally viablein section 4 we show that cotraining outperforms selftraining and that cotraining is most beneficial when the seed set of manually created parses is smallsection 44 shows that cotraining is possible even when the set of initially labelled data is drawn from a different distribution to either the unlabelled training material or the test set that is we show that cotraining can help in porting a parser from one genre to anotherfinally section 5 reports summary results of our experimentscotraining can be informally described in the following manner effectively by picking confidently labelled data from each model to add to the training data one model is labelling data for the otherthis is in contrast to selftraining in which a model is retrained only on the labelled examples that it produces blum and mitchell prove that when the two views are conditionally independent given the label and each view is sufficient for learning the task cotraining can improve an initial weak learner using unlabelled datadasgupta et al extend the theory of cotraining by showing that by maximising their agreement over the unlabelled data the two learners make few generalisation errors abney argues that this assumption is extremely restrictive and typically violated in the data and he proposes a weaker independence assumptionabney also presents a greedy algorithm that maximises agreement on unlabelled datagoldman and zhou show that through careful selection of newly labelled examples cotraining can work even when the classifiers views do not fully satisfy the independence assumptionto apply the theory of cotraining to parsing we need to ensure that each parser is capable of learning the parsing task alone and that the two parsers have different viewswe could also attempt to maximise the 
agreement of the two parsers over unlabelled data using a similar approach to that given by abneythis would be computationally very expensive for parsers however and we therefore propose some practical heuristics for determining which labelled examples to add to the training set for each parserour approach is to decompose the problem into two stepsfirst each parser assigns a score for every unlabelled sentence it parsed according to some scoring function f estimating the reliability of the label it assigned to the sentence note that the scoring functions used by the two parsers do not necessarily have to be the samenext a selection method decides which parser is retrained upon which newly parsed sentencesboth scoring and selection phases are controlled by a simple incremental algorithm which is detailed in section 32an ideal scoring function would tell us the true accuracy rates of the trees that the parser producedin practice we rely on computable scoring functions that approximate the true accuracy scores such as measures of uncertaintyin this paper we use the probability of the most likely parse as the scoring function fprob(w) = max v∈V P(v) where w is the sentence and V is the set of parses produced by the parser for the sentencescoring parses using parse probability is motivated by the idea that parse probability should increase with parse correctnessduring the selection phase we pick a subset of the newly labelled sentences to add to the training sets of both parsersthat is a subset of those sentences labelled by the ltag parser is added to the training set of the collins pcfg parser and vice versait is important to find examples that are reliably labelled by the teacher as training data for the studentthe term teacher refers to the parser providing data and student to the parser receiving data (figure 1: a and b are two different parsers; ma(i) and mb(i) are models of a and b at step i; u is a large pool of unlabelled sentences; u(i) is a small cache holding a subset of u at step i; l is the manually labelled seed data; la(i) and lb(i) are the labelled training examples for a and b at step i; the two models parse the sentences in the cache and assign scores to them according to their scoring functions fa and fb; select new parses pa and pb according to some selection method s which uses the scores from fa and fb; la(i+1) is la(i) augmented with pb and lb(i+1) is lb(i) augmented with pa)in the cotraining process the two parsers alternate between teacher and studentwe use a method which builds on this idea stopn which chooses those sentences that belong to the teacher n highest scored sentencesfor this paper we have used a simple scoring function and selection method but there are alternativesother possible scoring functions include a normalized version of fprob which does not penalize longer sentences and a scoring function based on the entropy of the probability distribution over all parses returned by the parserother possible selection methods include selecting examples that one parser scored highly and another parser scored lowly and methods based on disagreements on the label between the two parsersthese methods build on the idea that the newly labelled data should not only be reliably labelled by the teacher but also be as useful as possible for the studentthe pseudocode for the cotraining process is given in figure 1 and consists of two different parsers and a central control that interfaces between the two parsers and the dataat each cotraining iteration a small set of sentences is drawn from a large pool of unlabelled sentences and stored in a cacheboth parsers then attempt to parse every
sentence in the cachenext a subset of the sentences newly labelled by one parser is added to the training data of the other parser and vice versathe general control flow of our system is similar to the algorithm described by blum and mitchell however there are some differences in our treatment of the training datafirst the cache is flushed at each iteration instead of only replacing just those sentences moved from the cache the entire cache is refilled with new sentencesthis aims to ensure that the distribution of sentences in the cache is representative of the entire pool and also reduces the possibility of forcing the central control to select training examples from an entire set of unreliably labelled sentencessecond we do not require the two parsers to have the same training setsthis allows us to explore several selection schemes in addition to the one proposed by blum and mitchellin order to conduct cotraining experiments between statistical parsers it was necessary to choose two parsers that generate comparable output but use different statistical modelswe therefore chose the following parsersparser model 2some code for training this parser was added to make the cotraining experiments possiblewe refer to this parser as collinscfgin order to perform the cotraining experiments reported in this paper ltag derivation events collinscfg ltag bilexical dependencies are between bilexical dependencies are between lexicalized nonterminals elementary trees can produce novel elementary can produce novel hilexical trees for the ltag parser dependencies for collinscfg when using small amounts of seed data when using small amounts of seed data abstains less often than ltag abstains more often than collinscfg were extracted from the headlexicalized parse tree output produced by the collinscfg parserthese events were used to retrain the statistical model used in the ltag parserthe output of the ltag parser was also modified in order to provide input for the retraining phase in the collinscfg parserthese steps ensured that the output of the collinscfg parser could be used as new labelled data to retrain the ltag parser and vice versathe domains over which the two models operate are quite distinctthe ltag model uses tree fragments of the final parse tree and combines them together while the collinscfg model operates on a much smaller domain of individual lexicalized nonterminalsthis provides a mechanism to bootstrap information between these two models when they are applied to unlabelled dataltag can provide a larger domain over which hilexical information is defined due to the arbitrary depth of the elementary trees it uses and hence can provide novel lexical relationships for the collinscfg model while the collinscfg model can paste together novel elementary trees for the ltag modela summary of the differences between the two models is given in figure 2 which provides an informal argument for why the two parsers provide contrastive views for the cotraining experimentsof course there is still the question of whether the two parsers really are independent enough for effective cotraining to be possible in the results section we show that the collinscfg parser is able to learn useful information from the output of the ltag parserfigure 3 shows how the performance of the collinscfg parser varies as the amount of manually annotated training data penn treebank is increasedthe graph shows a rapid growth in accuracy which tails off as increasing amounts of training data are addedthe learning curve shows that 
the maximum payoff from cotraining is likely to occur between 500 and 1000 sentencestherefore we used two sizes of seed data 500 and 1000 sentences to see if cotraining could improve parser performance using these small amounts of labelled seed datafor reference figure 4 shows a similar curve for the ltag parsereach parser was first initialized with some labelled seed data from the standard training split of the wsj penn treebankevaluation was in terms of parseval using a balanced fscore over labelled constituents from section 0 of the treebanki the fscore values are reported for each iteration of cotraining on the development set since we need to parse all sentences in section 0 at each iteration in the experiments reported in this paper we only evaluated one of the parsers the collinscfg parser at each iterationall results we mention are fscores for the collinscfg parserselftraining experiments were conducted in which each parser was retrained on its own outputselftraining provides a useful comparison with cotraining because any difference in the results indicates how much the parsers are benefiting from being trained on the output of another parserthis experiment also gives us some insight into the differences between the two parsing modelsselftraining was used by charniak where a modest gain was reported after retraining his parser on 30 million wordsthe results are shown in figure 5here both parsers were initialised with the first 500 sentences from the standard training split of the wsj penn treebanksubsequent unlabelled sentences were also drawn from this splitduring each round of selftraining 30 sentences were parsed by each parser and each parser was retrained upon the 20 selflabelled sentences which it scored most highly as the scorethe results vary significantly between the collinscfg and the ltag parser which lends weight to the argument that the two parsers are largely independent of each otherit also shows that at least for the collinscfg model a minor improvement in performance can be had from selftrainingthe ltag parser by contrast is hurt by selftraining the first cotraining experiment used the first 500 sentences from sections 221 of the treebank as seed data and subsequent unlabelled sentences were drawn from the remainder of these sectionsduring each cotraining round the ltag parser parsed 30 sentences and the 20 labelled sentences with the highest scores were added to the training data of the collinscfg parserthe training data of the ltag parser was augmented in the same way using the 20 highest scoring parses from the set of 30 but using the collinscfg parser to label the sentences and provide the joint probability for scoringfigure 6 gives the results for the collinscfg parser and also shows the selftraining curve for the upper curve is for cotraining between collinscfg and ltag the lower curve is selftraining for collinscfg comparison2 the graph shows that cotraining results in higher performance than selftrainingthe graph also shows that cotraining performance levels out after around 80 rounds and then starts to degradethe likely reason for this dip is noise in the parse trees added by cotrainingpierce and cardie noted a similar behaviour when they cotrained shallow parsers upper curve is for 1000 sentences labelled data from brown plus 100 wsj sentences the lower curve only uses 1000 sentences from brownthe second cotraining experiment was the same as the first except that more seed data was used the first 1000 sentences from sections 221 of the treebankfigure 7 
gives the results and for comparison also shows the previous performance curve for the 500 seed set experimentthe key observation is that the benefit of cotraining is greater when the amount of seed material is smallour hypothesis is that when there is a paucity of initial seed data coverage is a major obstacle that cotraining can addressas the amount of seed data increases coverage becomes less of a problem and the cotraining advantage is diminishedthis means that when most sentences in the testing set can be parsed subsequent changes in performance come from better parameter estimatesalthough cotraining boosts the performance of the parser using the 500 seed sentences from 75 to 778 it does not achieve the level of performance of a parser trained on 1000 seed sentencessome possible explanations are that the newly labelled sentences are not reliable that the sentences deemed reliable are not informative training examples or a combination of both factorsthis experiment examines whether cotraining can be used to boost performance when the unlabelled data are taken from a different source than the initial seed dataprevious experiments in gildea have shown that porting a statistical parser from a source genre to a target genre is a nontrivial taskour two different sources were the parsed section of the brown corpus and the penn treebank wsjunlike the wsj the brown corpus does not contain newswire material and so the two sources differ from each other in terms of vocabulary and syntactic constructs1000 annotated sentences from the brown section of the penn treebank were used as the seed datacotraining then proceeds using the wsj3 note that no manually created parses in the wsj domain are used by the parser even though it is evaluated using wsj materialin figure 8 the lower curve shows performance for the collinscfg parser the difference in corpus domain does not hinder cotrainingthe parser performance is boosted from 75 to 773note that most of the improvement is within the first 5 iterationsthis suggests that the parsing model may be adapting to the vocabulary of the new domainwe also conducted an experiment in which the initial seed data was supplemented with a tiny amount of annotated data from the domain of the unlabelled datathis experiment simulates the situation where there is only a very limited amount of labelled material in the novel domainthe upper curve in figure 8 shows the outcome of this experimentnot surprisingly the 100 additional labelled wsj sentences improved the initial performance of the parser while the amount of improvement in performance is less than the previous case cotraining provides an additional boost to the parsing performance to 787the various experiments are summarised in table 1as is customary in the statistical parsing literature we view all our previous experiments using section 0 of the penn treebank wsj as contributing towards developmenthere we report on system performance on unseen material we give fscore results for the collinscfg parser before and after cotraining for section 23the results show a modest improvement under each cotraining scenario indicating that for the collinscfg parser there is useful information to be had from the output of the ltag parserhowever the results are not as dramatic as those reported in other cotraining papers such as blum and mitchell for webpage classification and collins and singer for namedentity recognitiona possible reason is that parsing is a much harder task than these problemsan open question is whether 
cotraining can produce results that improve upon the stateoftheart in statistical parsinginvestigation of the convergence curves as the parsers are trained upon more and more manuallycreated treebank material suggests that with the penn treebank the collinscfg parser has nearly converged alreadygiven 40000 sentences of labelled data we can obtain a projected value of how much performance can be improved with additional reliably labelled datathis projected value was obtained by fitting a curve to the observed convergence results using a leastsquares method from mat labwhen training data is projected to a size of 400k manually created treebank sentences the performance of the collinscfg parser is projected to be 892 with an absolute upper bound of 893this suggests that there is very little room for performance improvement for the collinscfg parser by simply adding more labelled data however models whose parameters have not already converged might benefit from cotraining for instance when training data is projected to a size of 400k manually created treebank sentences the performance of the ltag statistical parser would be 904 with an absolute upper bound of 916thus a bootstrapping method might improve performance of the ltag statistical parser beyond the current stateoftheart performance on the treebankin this paper we presented an experimental study in which a pair of statistical parsers were trained on labelled and unlabelled data using cotraining our results showed that simple heuristic methods for choosing which newly parsed sentences to add to the training data can be beneficialwe saw that cotraining outperformed selftraining that it was most beneficial when the seed set was small and that cotraining was possible even when the seed material was from another distribution to both the unlabelled material or the testing setthis final result is significant as it bears upon the general problem of having to build models when little or no labelled training material is available for some new domaincotraining performance may improve if we consider cotraining using subparsesthis is because a parse tree is really a large collection of individual decisions and retraining upon an entire tree means committing to all such decisionsour ongoing work is addressing this point largely in terms of reranked parsersfinally future work will also track comparative performance between the ltag and collinscfg modelsthis work has been supported in part by the nsfdarpa funded 2002 language engineering workshop at johns hopkins universitywe would like to thank michael collins andrew mccallum and fernando pereira for helpful discussions and the reviewers for their comments on this paper
E03-1008
bootstrapping statistical parsers from small datasetswe present a practical cotraining method for bootstrapping statistical parsers using a small amount of manually parsed training material and a much larger pool of raw sentencesexperimental results show that unlabelled sentences can be used to improve the performance of statistical parsersin addition we consider the problem of bootstrapping parsers when the manually parsed training material is in a different domain to either the raw sentences or the testing materialwe show that bootstrapping continues to be useful even though no manually produced parses from the target domain are usedwe examine selftraining for pcfg parsing in the small seed case we report either minor improvements or significant damage from using selftraining for parsingwe find degradation using a lexicalized tree adjoining grammar parser and minor improvement using collins lexicalized pcfg parser however this gain was obtained only when the parser was trained on a small labeled set
combining distributional and morphological information for part of speech induction in this paper we discuss algorithms for clustering words into classes from unlabelled text using unsupervised algorithms based on distributional and morphological information we show how the use of morphological information can improve the performance on rare words and that this is robust across a wide range of languages the task studied in this paper is the unsupervised learning of partsofspeech that is to say lexical categories corresponding to traditional notions of for example nouns and verbsas is often the case in machine learning of natural language there are two parallel motivations first a simple engineering one the induction of these categories can help in smoothing and generalising other models particularly in language modelling for speech recognition as explored by and secondly a cognitive science motivation exploring how evidence in the primary linguistic data can account for first language acquisition by infant children at this early phase of learning only limited sources of information can be used primarily distributional evidence about the contexts in which words occur and morphological evidence about the sequence of symbols of which each word is formeda number of different approaches have been presented for this task using exclusively distributional evidence to cluster the words together starting with and these have been shown to produce good results in english japanese and chinesethese languages have however rather simple morphology and thus words will tend to have higher frequency than in more morphologically complex languagesin this paper we will address two issues first whether the existing algorithms work adequately on a range of languages and secondly how we can incorporate morphological informationwe are particularly interested in rare words as points out it is most important to cluster the infrequent words as we will have reliable information about the frequent words and yet it is these words that are most difficult to clusterwe accordingly focus both in our algorithms and our evaluation on how to cluster words effectively that occur only a few times in the training datain addition we are interested primarily in inducing small numbers of clusters from comparatively small amounts of data using limited or no sources of external knowledge and in approaches that will work across a wide range of languages rather than inducing large numbers from hundreds of millions of wordsnote this is different from the common task of guessing the word category of an unknown word given a preexisting set of partsofspeech a task which has been studied extensively our approach will be to incorporate morphological information of a restricted form into a distributional clustering algorithmin addition we will use a very limited sort of frequency information since rare words tend to belong to open class categoriesthe input to the algorithm is a sequence of tokens each of which is considered as a sequence of characters in a standard encodingthe rest of this paper is structured as follows we will first discuss the evaluation of the models in some detail and present some simple experiments we have performed here we will then discuss the basic algorithm that is the starting point for our research in section 3then we show how we can incorporate a limited form of morphological information into this algorithm in section 4section 5 presents the results of our evaluations on a number of data sets drawn from 
typologically distinct languageswe then briefly discuss the use of ambiguous models or soft clustering in section 6 and then finish with our conclusions and proposals for future worka number of different approaches to evaluation have been proposed in the pastfirst early work used an informal evaluation of manually comparing the clusters or dendrograms produced by the algorithms with the authors intuitive judgment of the lexical categoriesthis is inadequate for a number of obvious reasons first it does not allow adequate comparison of different techniques and secondly it restricts the languages that can easily be studied to those in which the researcher has competence thus limiting experimentation on a narrow range of languagesa second form of evaluation is to use some data that has been manually or semiautomatically annotated with part of speech tags and to use some information theoretic measure to look at the correlation between the correct data and the induced pos tagsspecifically one could look at the conditional entropy of the gold standard tags given the induced tagswe use the symbol w to refer to the random variable related to the word g for the associated gold standard tag and t for the tag produced by one of our algorithmsrecall that thus low conditional entropy means that the mutual information between the gold and induced tags will be highif we have a random set of tags the mutual information will be zero and the conditional entropy will be the same as the entropy of the tag setagain this approach has several weaknesses there is not a unique welldefined set of partofspeech tags but rather many different possible sets that reflect rather arbitrary decisions by the annotatorsto put the scores we present below in context we note that using some data sets prepared for the amalgam project the conditional entropies between some data manually tagged with different tag sets varied from 022 to 13 secondly because of the zipfian distribution of word frequencies simple baselines that assign each frequent word to a different class can score rather highly as we shall see belowa third evaluation is to use the derived classification in a classbased language model and to measure the perplexity of the derived modelhowever it is not clear that this directly measures the linguistic plausibility of the classificationin particular many parts of speech represent longdistance combinatorial properties and a simple finitestate model with local context will not measure thiswe can also compare various simple baselines to see how they perform according to these simple measuresfrequent word baseline take the n 1 most frequent words and assign them each to a separate class and put all remaining words in the remaining classword baseline each word is in its own classwe performed experiments on parts of the wall street journal corpus using the corpus tagswe chose sections 0 19 a total of about 500000 wordstable 1 shows that the residual conditional entropy with the word baseline is only 012this reflects lexical ambiguityif all of the words were unambiguous then the conditional entropy of the tag given the word would be zerowe are therefore justified in ignoring ambiguity for the moment since it vastly improves the efficiency of the algorithmsclearly as the number of clusters increases the conditional entropy will decrease as is demonstrated belowthe basic methods here have been studied in detail by and we assume a vocabulary of words v w1 our task is to learn a deterministic clustering that is to say a class 
membership function g from v into the set of class labels nthis clustering can be used to define a number of simple statistical modelsthe objective function we try to maximise will be the likelihood of some model ie the probability of the data with respect to the modelthe simplest candidate for the model is the class bigram model though the approach can also be extended to class trigram modelssuppose we have a corpus of length n wnwe can assume an additional sentence boundary tokenthen the class bigram model defines the probability of the next word given the history as p pp1g it is not computationally feasible to search through all possible partitions of the vocabulary to find the one with the highest value of the likelihood we must therefore use some search algorithm that will give us a local optimumwe follow and use an exchange algorithm similar to the kmeans algorithm for clusteringthis algorithm iteratively improves the likelihood of a given clustering by moving each word from its current cluster to the cluster that will give the maximum increase in likelihood or leaving it in its original cluster if no improvement can be foundthere are a number of different ways in which the initial clustering can be chosen it has been found and our own experiments have tended to confirm this that the initialisation method has little effect on the final quality of the clusters but can have a marked effect on the speed of convergence of the algorithma more important variation for our purposes is how the rare words are treated leave all words with a frequency of less than 5 in a particular class from which they may not be movedthe second sort of information is information about the sequence of letters or phones that form each wordto take a trivial example if we encounter an unknown word say 212000 then merely looking at the sequence of characters that compose it is enough to enable us to make a good guess as to its part of speechless trivially if a word in english ends in ing then it is quite likely to be a present participlewe can distinguish this sort of information which perhaps could better be called orthotactic or phonotactic information from a richer sort which incorporates relational information between the words thus given a novel word that ends in quotingquot such as quotderailingquot one could use the information that we had already seen the token quotderailedquot as additional evidenceone way to incorporate this simple source of information would be to use a mixture of string models alone without distributional evidencesome preliminary experiments not reported here established that this approach could only separate out the most basic differences such as sequences of numbersa more powerful approach is to combine the distributional information with the morphological information by composing the neyessen clustering model with a model for the morphology within a bayesian frameworkwe use the same formula for the probability of the data given the model but include an additional term for the probability of the model that depends on the strings used in each clusterwe wish to bias the algorithm so that it will put words that are morphologically similar in the same clusterwe can consider thus a generative process that produces sets of clusters as used beforeconsider the vocabulary v to be a subset of e where e is the set of characters or phonemes used and let the model have for each cluster i a distribution over e say p then we define the probability of the partition as ignoring irrelevant 
normalisation constantsthis will give a higher probability to partitions where morphologically similar strings are in the same clusterthe models we will use here for the cluster dependent word string probabilities will be letter hidden markov models we decided to use hmms rather than more powerful models such as character trigram models because we wanted models that were capable of modelling properties of the whole string though in english and in other european languages local statistics such as those used by ngram models are adequate to capture most morphological regularities in other languages this is not the casemoreover we wish to have comparatively weak models otherwise the algorithm will capture irrelevant orthotactic regularities such as a class of words starting with quotstquot in englishin addition we can modify this to incorporate information about frequencywe know that rare words are more likely to be nouns proper nouns or members of some other open word class rather than say pronouns or articleswe can do this simply by adding prior class probabilities ai to the above equation giving we can use the maximum likelihood estimates for ozi which are just the number of distinct types in cluster i divided by the total number of types in the corpusthis just has the effect of discriminating between classes that will have lots of types and clusters that tend to have few types it is possible that in some languages there might be more subtle category related frequency effects that could benefit from more complex models of frequencywe used texts prepared for the multexteast project which consists of data in seven languages the original english together with romanian czech slovene bulgarian estonian and hungarianthese are summarised in table 2as can be seen they cover a wide range of language families furthermore bulgarian is written in cyrillic which slightly stretches the rangetokentype ratios range from 121 for english to 484 for hungarianthe tags used are extremely finegrained and incorporate a great deal of information about case gender and so on in hungarian for example 400 tags are used with 86 tags used only oncetable 3 shows the result of our crosslinguistic evaluation on this datasince the data sets are so small we decided to use the conditional entropy evaluationhere do refers to the distributional clustering algorithm where all words are clustered d5 leaves all words with frequency at most 5 in a seperate cluster dm uses morphological information as well df uses frequency information and dmf uses morphological and frequency informationwe evaluated it for all words and also for words with frequency at most 5we can see that the use of morphological information consistently improves the results on the rare words by a substantial marginin some cases however a simpler algorithm performs better when all the words are considered notably in slovene and estonianwe have also evaluated this method by comparing the perplexity of a classbased language model derived from these classeswe constructed a class bigram model using absolute interpolation with a singleton generalised distribution for the transition weights and using absolute discounting with backing off for the membershipoutput function we trained the model on sections 0009 of the penn treebank and tested it on sections 10 l 9 we used the full vocabulary of the training and test sets together which was 45679 of which 14576 had frequency zero in the training data and thus had to be categorised based solely on their morphology and 
frequencywe did not reduce the vocabulary or change the capitalization in any waywe compared different models with varying numbers of clusters 32 64 and 128table 4 shows the results of the perplexity evaluation on the wsj dataas can be seen the models incorporating morphological information have slightly lower perplexity on the test data than the d5 modelnote that this is a global evaluation over all the words in the data including words that do not occur in the training data at allfigure 5 shows how the conditional entropy varies with respect to the frequency for these modelsas can be seen the use of morphological information improves the preformance markedly for rare words and that this effect reduces as the frequency increasesnote that the use of the frequency information worsens the performance for rare words according to this evaluation this is because the rare words are much more tightly grouped into just a few clusters thus the entropy of the cluster tags is lowertable 5 shows a qualitative evaluation of some of the clusters produced by the best performing model for 64 clusters on the wsj data setwe selected the 10 clusters with the largest number of zero frequency word types inwe examined each cluster and chose a simple regular expression to describe it and calculated the precision and recall for words of all frequency and for words of zero frequencynote that several of the clusters capture syntactically salient morphological regularities regular verb suffixes noun suffixes and the presence of capitalisation are all detected together with a class for numbersin some cases these are split amongst more than one class thus giving classes with high precision and low recallwe made no attempt to adjust the regular expressions to make these scores high we merely present them as an aid to an intuitive understanding of the composition of these clustersup until now we have considered only hard clusters where each word is unambiguously assigned to a single classclearly because of lexical ambiguity we would like to be able to assign some words to more than one classthis is sometimes called soft clusteringspace does not permit an extensive analysis of the situationwe shall therefore report briefly on some experiments we have performed and our conclusions largely leaving this as an area for future research have presented models that account for ambiguity to some extentthe most principled way is to use hidden markov models these provide the formal and technical apparatus required to train when the tags might be ambiguous presents this idea together with a simple evaluation on englishwe therefore extend our approach to allow ambiguous words by changing our model from a deterministic to nondeterministic modelin this situation we want the states of the hmm to correspond to syntactic categories and use the standard expectationmaximization algorithm to train itto experiment with this we chose fullyconnected randomly initialized hidden markov models with determined start and end stateswe trained the model on the various sentences in the model on wsj datawith 5 substates 20 iterations corpus and then tagged the data with the most likely tag sequencewe then evaluated the conditional entropy of the gold standard tags given the derived hmm tagstable 6 shows the results of this evaluation on some english data for various numbers of statesas can be seen increasing the number of states of the model does not reduce the conditional entropy of the gold standard tags rather it increases the lexical ambiguity of 
the model hthis is because the states of the hmm will not necessarily correspond directly to syntactic categories rather they correspond to sets of words that occur in particular positions for example the model might have a state that corresponds to a noun that occurs before a main verb and a separate state that corresponds to a noun after a main verbone explanation for this is that the output function from each state of the hmm is a multinomial distribution over the vocabulary which is too powerful since it can memorise any set of words thus there is no penalty for the same word being produced by many different statesthis suggests a solution that is to replace the multinomial distribution by a weaker distribution such as the hidden markov models we have used beforethis gives us a twolevel hmm a hmm where each state corresponds to a word and where the output function is a hmm where each state corresponds to a letterthis relates to two other approaches that we are aware of and table 7 shows a simple evaluation of this approach we can see that this does not suffer from the same drawback as the previous approach though the results are still poor compared to the other approaches and in fact are consistently worse than the baselines of table 1the problem here is that we are restricted to using quite small hmms which are insufficiently powerful to memorise large chunks of the vocabulary and in addition the use of the forwardbackward algorithm is more computationally expensive by at least a factor of the number of stateswe have applied several different algorithms to the task of identifying parts of speechwe have demonstrated that the use of morphological information can improve the performance of the algorithm with rare words quite substantiallywe have also demonstrated that a very simple use of frequency can provide further improvementsadditionally we have tested this on a wide range of languagesintuitively we have used all of the different types of information available when we encounter a new word we know three things about it first the context that it has appeared in secondly the string of characters that it is made of and thirdly that it is a new word and therefore rarewe have so far used only a limited form of morphological information that relies on properties of individual strings and does not relate particular strings to each otherwe plan to use this stronger form of information using pair hidden markov models as described in
E03-1009
combining distributional and morphological information for part of speech inductionin this paper we discuss algorithms for clustering words into classes from unlabelled text using unsupervised algorithms based on distributional and morphological informationwe show how the use of morphological information can improve the performance on rare words and that this is robust across a wide range of languageswe propose a perplexity based test for the quality of the pos induction algorithmwe find that manyto1 accuracy has several defects
investigating gis and smoothing for maximum entropy taggers this paper investigates two elements of maximum entropy tagging the use of a correction feature in the generalised iterative scaling estimation algorithm and techniques for model smoothing we show analytically and empirically that the correction feature assumed to be required for the correctof unnecessary we also explore the use of a gaussian prior and a simple cutoff for smoothing the experiments are performed with two tagsets standard penn treebank and the larger set of lexical types from the use of maximum entropy models has become popular in statistical nlp some example applications include partofspeech tagging parsing and language modelling many tagging problems have been successfully modelled in the me framework including pos tagging with state of the art performance quotsupertaggingquot and chunking generalised iterative scaling is a very simple algorithm for estimating the parameters of a me modelthe original formulation of gis required the sum of the feature values for each event to be constantsince this is not the case for many applications the standard method is to add a quotcorrectionquot or quotslackquot feature to each event improved iterative scaling eliminated the correction feature to improve the convergence rate of the algorithmhowever the extra book keeping required for us means that gis is often faster in practice this paper shows by a simple adaptation of berger proof for the convergence of hs that gis does not require a correction featurewe also investigate how the use of a correction feature affects the performance of me taggersgis and hs obtain a maximum likelihood estimate of the parameters and like other mle methods are susceptible to overfittinga simple technique used to avoid overfitting is a frequency cutoff in which only frequently occurring features are included in the model however more sophisticated smoothing techniques exist such as the use of a gaussian prior on the parameters of the model this technique has been applied to language modelling text classification and parsing but to our knowledge it has not been compared with the use of a feature cutoffwe explore the combination of gaussian smoothing and a simple cutoff for two tagging tasksthe two taggers used for the experiments are a pos tagger trained on the wsj penn treebank and a quotsupertaggerquot which assigns tags from the much larger set of lexical types from combinatory categorial grammar elimination of the correction feature and use of appropriate smoothing methods result in state of the art performance for both tagging tasksa conditional me model also known as a loglinear model has the following form where the functions fi are the features of the model the a are the parameters or weights and z is a normalisation constantthis form can be derived by choosing the model with maximum entropy from a set of models that satisfy a certain set of constraintsthe constraints are that the expected value of each feature fi according to the model p is equal to some value ki calculating the expected value according to p requires summing over all contexts x which is not possible in practicetherefore we use the now standard approximation where p is the relative frequency of context x in the datathis is convenient because p is zero for all those events not seen in the training datafinding the maximum entropy model that satisfies these constraints is a constrained optimisation problem which can be solved using the method of lagrange multipliers and leads 
to the form in where the ai are the lagrange multipliersa natural choice for ki is the empirical expected value of the feature fi xo an alternative motivation for this model is that starting with the loglinear form in and deriving mles we arrive at the same solution as the me model which satisfies the constraints in gis is a very simple algorithm for estimating the parameters of a me modelthe algorithm is as follows where e p f is the empirical expected value of j and e p fi is the expected value according to model p in practice c is maximised over the pairs in the training data although in theory c can be any constant greater than or equal to the figure in however since determines the rate of convergence of the algorithm it is preferable to keep c as small as possiblethe original formulation of gis required the sum of the feature values for each event to be constantsince this is not the case for many applications the standard method is to add a quotcorrectionquot or quotslackquot feature to each event defined as follows for our tagging experiments the use of a correction feature did not significantly affect the resultsmoreover we show in the appendix by a simple adaptation of berger proof for the convergence of hs that gis converges to the maximum likelihood model without a correction feature1 the proof works by introducing a correction feature with fixed weight of 0 into the iis convergence proofthis feature does not contribute to the model and can be ignored during weight updateintroducing this null feature still satisfies jensen inequality which is used to provide a lower bound on the change in likelihood between iterations and the existing gis weight update can still be derived analyticallyan advantage of gis is that it is a very simple algorithm made even simpler by the removal of the correction featurethis simplicity means that although gis requires more iterations than 11s to reach convergence in practice it is significantly faster several methods have been proposed for smoothing me models for taggers a standard technique is to eliminate low frequency features based on the assumption that they are unreliable or uninformative studies of infrequent features in other domains suggest this assumption may be incorrect we test this for me taggers by replacing the cutoff with the use of a gaussian prior a technique which works well for language models when using a gaussian prior the objective function is no longer the likelihood l but has the form 2omaximising this function is a form of maximum a posteriori estimation rather than maximum likelihood estimationthe effect of the prior is to penalise models that have very large positive or negative weightsthis can be thought of as relaxing the constraints in so that the model fits the data less exactlythe parameters o are usually collapsed into one parameter which can be set using heldout datathe new update rule for gis with a gaussian prior is found by solving the following equation for the ai update values which can easily be derived from by analogy with the proof in the appendix this equation does not have an analytic solution for si and can be solved using a numerical solver such as newtonraphsonnote that this new update rule is still significantly simpler than that required for 11swe reimplemented ratnaparkhi publicly available pos tagger mxpost and clark ccg supertagger as a starting point for our experimentsccg supertagging is more difficult than pos tagging because the set of quottagsquot assigned by the supertagger is much larger the 
supertagger assigns ccg lexical categories which encode subcategorisation informationtable 1 gives some examplesthe features used by each tagger are binary valued and pair a tag with various elements of the context for example fi 1 if word the y dt y word the is an example of what ratnaparkhi calls a contextual predicatethe contextual predicates used by the two taggers are given in table 2 where w is the ith word and ti is the ith tagwe insert a special end of sentence symbol at sentence boundaries so that the features looking forwards and backwards are always definedthe supertagger uses pos tags as additional features which clark found improved performance significantly and does not use the morphological features since the pos tags provide equivalent informationfor the supertagger t is the lexical category of the ith wordthe conditional probability of a tag sequence y y given a sentence w wn is approximated as follows where x is the context of the ith wordthe tagger returns the most probable sequence for the sentencefollowing ratnaparkhi beam search is used to retain only the 20 most probable sequences during the tagging process2 we also use a quottag dictionaryquot so that words appearing 5 or more times in the data can only be assigned those tags previously seen with the wordwe develop and test our improved pos tagger using the standard parser development methodology on the penn treebank wsj corpustable 3 shows the number of sentences and words in the training development and test datasetsas well as evaluating the overall accuracy of the taggers we also calculate the accuracy on previously unseen words previously unseen wordtag pairs and ambiguous words that is those with more than one tag over the testing training and development datasetsnote that the unseen wordtag pairs do not include the previously unseen wordswe first replicated the results of the mxpost taggerin doing so we discovered a number of minor variations from ratnaparkhi mxpost uses a cutoff of 1 for the current word feature and 5 for other featureshowever the current word must have appeared at least 5 times with any tag for the current word feature to be included otherwise the word is considered rare and morphological features are included insteadtable 4 shows the performance of mxpost and our reimplementation3 the third row shows a minor improvement in performance when the correction feature is removedwe also experimented with the default contextual predicate but found it had little impact on the performancefor the remainder of the experiments we use neither the correction nor the default featuresthe rest of this section considers various combinations of feature cutoffs and gaussian smoothingwe report optimal results with respect to the smoothing parameter a where a no2 and n is the number of training instanceswe found that using a 2 gave the most benefit to our basic tagger improving performance by about 015 on the development setthis result is shown in the first row of table 5the remainder of table 5 shows a minimal change in performance when the current word and previous word cutoffs are variedthis led us to reduce the cutoffs for all features simultaneouslytable 6 gives results for cutoff values between 1 and 4the best performance is obtained when the cutoffs are eliminated entirelygaussian smoothing has allowed us to retain all of the features extracted from the corpus and reduce overfittingto get more information into the model more features must be extracted and so we investigated the addition of the current word 
feature for all words including the rare onesthis resulted in a minor improvement and gave the best performance on the development data 9683table 7 shows the final performance on the test set using the best configuration on the development data compared with mxpostthe improvement is 022 overall and 158 for unknown words the obvious cost associated with retaining all the features is the significant increase in model size which slows down both the training and tagging and requires more memorytable 8 shows the difference in the number of contextual predicates and features between the original and final taggersto ensure the robustness of our results we performed 10fold crossvalidation using the whole of the wsj penn treebankthe 24 sections were split into 10 equal components with 9 used for training and 1 for testingthe final result is an average over the 10 different splits given in table 9 where o is the standard deviation of the overall accuracywe also performed 10fold crossvalidation using mxpost and tnt a publicly available markov model po s tagger the difference between mxpost and cc represents a reduction in error rate of 43 and the difference between tnt and cc a reduction in error rate of 108we also compare our performance against other published results that use different training and testing sectionscollins uses wsj 0018 for training and wsj 2224 for testing and toutanova and manning use wsj 0020 for training and wsj 2324 for testingcollins uses a linear perceptron and toutanova and manning use a me tagger also based on mxpostour performance is slightly worse than collins but better than tm we noticed during development that unknown word performance improves with larger a values at the expense of overall accuracy and so using separate cy for different types of contextual predicates may improve performancea similar approach has been shown to be successful for language modelling the lexical categories for the supertagging experiments were extracted from ccgbank a ccg version of the penn treebank following clark all categories that occurred at least 10 times in the training data were used resulting in a tagset of 398 categoriessections 0221 section 00 and section 23 were used for training development and testing as beforeour supertagger used the same configuration as our best performing pos tagger except that the a parameter was again optimised on the development setthe results on section 00 and section 23 are given in tables 11 and 124 cc outperforms clark supertagger by 043 on the test set a reduction in error rate of 49supertagging has the potential to benefit more from gaussian smoothing than pos tagging because the feature space is sparser by virtue of the much larger tagsetgaussian smoothing would also allow us to incorporate rare longer range dependencies as features without risk of overfittingthis may further boost supertagger performancethis paper has demonstrated both analytically and empirically that gis does not require a correction feature eliminating the correction feature simplifies further the already very simple estimation algorithmalthough gis is not as fast as some alternatives such as conjugate gradient and limited memory variable metric methods our cc pos tagger takes less than 10 minutes to train and the space requirements are modest irrespective of the size of the tagsetwe have also shown that using a gaussian prior on the parameters of the me model improves performance over a simple frequency cutoffthe gaussian prior effectively relaxes the constraints on the me 
model which allows the model to use low frequency features without overfittingachieving optimal performance with gaussian smoothing and without cutoffs demonstrates that low frequency features can contribute to good performancewe would like to thank joshua goodman miles osborne andrew smith hanna wallach tara murphy and the anonymous reviewers for their comments on drafts of this paperthis research is supported by a commonwealth scholarship and a sydney university travelling scholarship to the first author and epsrc grant grm96889kamal nigam john lafferty and andrew mccallum1999using maximum entropy for text classificationin proceedings of the ijcai99 workshop on machine learning for information filtering pages 6167 stockholm swedenadwait ratnaparkhi1996a maximum entropy partofspeech taggerin proceedings of the emnlp conference pages 133142 philadelphia pa adwait ratnaparkhi1998maximum entropy models for natural language ambiguity resolutionphd thesis university of pennsylvaniaadwait ratnaparkhi1999learning to parse natural language with maximum entropy modelsmachine learning 34151175ronald rosenfeld1996a maximum entropy approach to adaptive statistical language modelingcomputer speech and language 10187228mark steedman2000the syntactic processthe mit press cambridge makristina toutanova and christopher d manning2000enriching the knowledge sources used in a maximum entropy partofspeech taggerin proceedings of the emnlp conference hong konghans van halteren jakub zavrel and walter daelemans2001improving accuracy in wordclass tagging through combination of machine learning systemscomputational linguistics 27 199229
E03-1071
investigating gis and smoothing for maximum entropy taggersthis paper investigates two elements of maximum entropy tagging the use of a correction feature in the generalised iterative scaling estimation algorithm and techniques for model smoothingwe show analytically and empirically that the correction feature assumed to be required for the correctness of gis is unnecessarywe also explore the use of a gaussian prior and a simple cutoff for smoothingthe experiments are performed with two tagsets the standard penn treebank pos tagset and the larger set of lexical types from combinatory categorial grammarour supertagger finds the single most probable category sequence given the sentenc and uses additional features defined in terms of the previously assigned categories
empirical methods for compound splitting compounded words are a challenge for nlp applications such as machine translation we introduce methods to learn splitting rules from monolingual and parallel corpora we evaluate them against a gold standard and measure their impact on performance of statistical mt systems results show accuracy of 991 and performance gains for mt of 0039 bleu on a germanenglish noun phrase translation task splitting options for the german word aktionsplan aktionsplan aktion actionplan action plan akt ion s plan act ion plan compounding of words is common in a number of languages since words may be joined freely this vastly increases the vocabulary size leading to sparse data problemsthis poses challenges for a number of nlp applications such as machine translation speech recognition text classification information extraction or information retrievalfor machine translation the splitting of an unknown compound into its parts enables the translation of the compound by the translation of its partstake the word aktionsplan in german which was created by joining the words aktion and planbreaking up this compound would assist the translation into english as action plancompound splitting is a well defined computational linguistics taskone way to define the goal of compound splitting is to break up foreign words so that a onetoone correspondence to english can be establishednote that we are looking for a onetoone correspondence to english content words say the preferred translation of aktionsplan is plan for actionthe lack of correspondence for the english word for does not detract from the definition of the task we would still like to break up the german compound into the two parts aktion and planthe insertion of function words is not our concernultimately the purpose of this work is to improve the quality of machine translation systemsfor instance phrasebased translation systems marcu and wong 2002 may recover more easily from splitting regimes that do not create a onetoone translation correspondenceone splitting method may mistakenly break up the word aktionsplan into the three words akt ion and planbut if we consistently break up the word aktion into akt and ion in our training data such a system will likely learn the translation of the word pair akt ion into the single english word actionthese considerations lead us to three different objectives and therefore three different evaluation metrics for the task of compound splitting for the first objective we compare the output of our methods to a manually created gold standardfor the second and third we provide differently prepared training corpora to statistical machine translation systemswhile the linguistic properties of compounds are widely studied langer 1998 there has been only limited work on empirical methods to split up compounds for specific applicationsbrown 2002 proposes a approach guided by a parallel corpusit is limited to breaking compounds into cognates and words found in a translation lexiconthis lexicon may also be acquired by training a statistical machine translation systemthe methods leads to improved text coverage of an example based machine translation system but no results on translation performance are reportedmonz and de rijke 2001 and hedlund et al 2001 successfully use lexicon based approaches to compound splitting for information retrievalcompounds are broken into either the smallest or the biggest words that can be found in a given lexiconlarson et al 2000 propose a datadriven method that 
combines compound splitting and word recombination for speech recognitionwhile it reduces the number of outofvocabulary words it does not improve speech recognition accuracymorphological analyzers such as morphix finkler and neumann 19981 usually provide a variety of splitting options and leave it to the subsequent application to pick the best choicecompounds are created by joining existing words togetherthus to enumerate all possible splittings of a compound we consider all splits into known wordsknown words are words that exist in a training corpus in our case the european parliament proceedings consisting of 20 million words of german koehn 2002when joining words filler letters may be inserted at the jointthese are called fugenelemente in germanrecall the example of aktionsplan where the letter s was inserted between aktion and plansince there are no simple rules for when such letters may be inserted we allow them between any two wordsas fillers we allow s and es when splitting german words which covers almost all casesother transformations at joints include dropping of letters such as when schweigen and minute are joined into schweigeminute dropping an n a extensive study of such transformations is carried out by langer 1998 for germanto summarize we try to cover the entire length of the compound with known words and fillers between wordsan algorithm to break up words in such a manner could be implemented using dynamic programming but since computational complexity is not a problem we employ an exhaustive recursive searchto speed up word matching we store the known words in a hash based on the first three lettersalso we restrict known words to words of at least length threefor the word aktionsplan we find the following splitting options we arrive at these splitting options since all the parts aktionsplan aktions aktion akt ion and plan have been observed as whole words in the training corpusthese splitting options are the basis of our workin the following we discuss methods that pick one of them as the correct splitting of the compoundthe more frequent a word occurs in a training corpus the bigger the statistical basis to estimate translation probabilities and the more likely the correct translation probability distribution is learned koehn and knight 20011this insight leads us to define a splitting metric based on word frequencygiven the count of words in the corpus we pick the split s with the highest geometric mean of word frequencies of its parts pi since this metric is purely defined in terms of german word frequencies there is not necessarily a relationship between the selected option and correspondence to english wordsif a compound occurs more frequently in the text than its parts this metric would leave the compound unbroken even if it is translated in parts into englishin fact this is the case for the example aktionsplanagain the four options behind each part we indicated its frequency in parenthesison the right side is the geometric mean score of these frequenciesthe score for the unbroken compound is higher than the preferred choice on the other hand a word that has a simple onetoone correspondence to english may be broken into parts that bear little relation to its meaningwe can illustrate this on the example of freitag which is broken into frei and tag as stated earlier one of our objectives is the splitting of compounds into parts that have onetoone correspondence to englishone source of information about word correspondence is a parallel corpus text in a foreign language 
accompanied by translations into englishusually such a corpus is provided in form of sentence translation pairsgoing through such a corpus we can check for each splitting option if its parts have translations in the english translation of the sentencein the case of aktionsplan we would expect the words action and plan on the english side but in case of freitag we would not expect the words free and daythis would lead us to break up aktionsplan but not freitagsee figure 2 for illustration of this methodthis approach requires a translation lexiconthe easiest way to obtain a translation lexicon is to learn it from a parallel corpusthis can be done with the toolkit giza alonaizan et al 1999 which establishes wordalignments for the sentences in the two languageswith this translation lexicon we can perform the method alluded to above for each german word we consider all splitting optionsfor each splitting option we check if it has translations on the english sideto deal with noise in the translation table we demand that the translation probability of the english word given the german word be at least 001we also allow each english word to be considered only once if it is taken as evidence for correspondence to the first part of the compound it is excluded as evidence for the other partsif multiple options match the english we select the one with the most splits and use word frequencies as the ultimate tiebreakerwhile this method works well for the examples aktionsplan and freitag it failed in our experiments for words such as grundrechte this word should be broken into the two parts grund and rechtehowever grund translates usually as reason or foundationbut here we are looking for a translation into the adjective basic or fundamentalsuch a translation only occurs when grund is used as the first part of a compoundto account for this we build a second translation lexicon as follows first we break up german words in the parallel corpus with the frequency methodthen we train a translation lexicon using giza from the parallel corpus with split german and unchanged englishsince in this corpus grund is often broken off from a compound we learn the translation table entry grunde4basicby joining the two translation lexicons we can apply the same method but this time we correctly split grundrechteby splitting all the words on the german side of the parallel corpus we acquire a vast amount of splitting knowledge this knowledge contains for instance that grundrechte was split up 213 times and kept together 17 timeswhen making splitting decisions for new texts we follow the most frequent option based on the splitting knowledgeif the word has not been seen before we use the frequency method as a backoffa typical error of the method presented so far is that prefixes and suffixes are often split offfor instance the word folgenden is broken off into folgen and den while this is nonsensical it is easy to explain the word the is commonly found in english sentences and therefore taken as evidence for the existence of a translation for denanother example for this is the word voraussetzung which is split into vor and aussetzungthe word vor translates to many different prepositions which frequently occur in englishto exclude these mistakes we use information about the partsofspeech of wordswe do not want to break up a compound into parts that are prepositions or determiners but only content words nouns adverbs adjectives and verbsto accomplish this we tag the german corpus with pos tags using the tnt tagger brants 
2000we then obtain statistics on the partsofspeech of words in the corpusthis allows us to exclude words based on their pos as possible parts of compoundswe limit possible parts of compounds to words that occur most of the time as one of following pos adja adjd adv nn ne ptkneg vvfin vvimp vvinf vvizu vvpp vafin vaimp vainf vapp vmfin vminf vmppthe training set for the experiments is a corpus of 650000 noun phrases and prepositional phrases for each german nppp we have a english translationthis data was extracted from the europarl corpus koehn 20021 with the help of a german and english statistical parserthis limitation is purely for computational reasons since we expect most compounds to be nounsan evaluation of full sentences is expected to show similar resultswe evaluate the performance of the described methods on a blind test set of 1000 nppps which contain 3498 wordsfollowing good engineering practice the methods have been developed with a different development test setthis restrains us from overfitting to a specific test setrecall that our first objective is to break up german words into parts that have a onetoone translation correspondence to english wordsto judge this we manually annotated the test set with correct splitsgiven this gold standard we can evaluate the splits proposed by the methodsthe results of this evaluation are given in table 1the columns in this table mean correct split words that should be split and were split correctly correct non words that should not be split and were not wrong not words that should be split but were not wrong faulty split words that should be split were split but wrongly wrong split words that should not be split but were precision recall accuracy to briefly review the methods raw unprocessed data with no splits eager biggest split ie the split into as many parts as possibleif multiple biggest splits are possible the one with the highest frequency score is taken frequency based split into most frequent words as described in section 4 using parallel split guided by splitting knowledge from a parallel corpus as described in section 5 using parallel and pos as previous with an additional restriction on the pos of split parts as described in section 6 since we developed our methods to improve on this metric it comes as no surprise that the most sophisticated method that employs splitting knowledge from a parallel corpus and information about pos tags proves to be superior with 991 accuracyits main remaining source of error is the lack of training datafor instance it fails on more obscure words such as passagierauficommen where even some of the parts have not been seen in the training corpusthe immediate purpose of our work is to improve the performance of statistical machine translation systemshence we use the splitting methods to prepare training and testing data to optimize the performance of such systemsfirst we measured the impact on a word based statistical machine translation system the widely studied ibm model 4 brown et al 1990 for which training tools alonaizan et al 19991 and decoders germann et al 2001 are freely availablewe trained the system on the 650000 nppps with the giza toolkit and evaluated the translation quality on the same 1000 nppp test set as in the previous sectiontraining and testing data was split consistently in the same waythe translation accuracy is measured against reference translations using the bleu score papineni et al 2002table 2 displays the resultssomewhat surprisingly the frequency based method leads to 
better translation quality than the more accurate methods that take advantage from knowledge from the parallel corpusone reason for this is that the system recovers more easily from words that are split too much than from words that are not split up sufficientlyof course this has limitations eager splitting into as many parts as possible fares abysmally73 translation quality with phrase based machine translation compound words violate the bias for onetoone word correspondences of word based smt systemsthis is one of the motivations for phrase based systems that translate groups of wordsone of such systems is the joint model proposed by marcu and wong 2002we trained this sysbased statistical machine translation systemthe ability to group split words into phrases overcomes the many mistakes of maximal splitting of words and outperforms the more accurate methods tem with the different flavors of our training data and evaluated the performance as beforetable 3 shows the resultshere the eager splitting method that performed so poorly with the word based smt system comes out aheadthe task of deciding the granularity of good splits is deferred to the phrase based smt system which uses a statistical method to group phrases and rejoin split wordsthis turns out to be even slightly better than the frequency based methodwe introduced various methods to split compound words into partsour experimental results demonstrate that what constitutes the optimal splitting depends on the intended applicationwhile one of our method reached 991 accuracy compared against a gold standard of onetoone correspondences to english other methods show superior results in the context of statistical machine translationfor this application we could dramatically improve the translation quality by up to 0039 points as measured by the bleu scorethe words resulting from compound splitting could also be marked as such and not just treated as regular words as they are nowfuture machine translation models that are sensitive to such linguistic clues might benefit even more
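To make the split enumeration and frequency-based selection described above (exhaustive recursive search over known words of length three or more, optional fillers s/es at the joints, and the geometric mean of part frequencies) concrete, here is a minimal Python sketch. The function names and the `word_counts` dictionary of corpus frequencies are illustrative assumptions rather than the paper's actual toolchain; the example frequencies are the ones given for aktionsplan in the text.

```python
# Minimal sketch of compound splitting by geometric mean of part frequencies.
# `word_counts` (corpus frequencies of known words) is assumed to be given.
from math import prod

FILLERS = ("", "s", "es")      # allowed filler letters between parts
MIN_PART_LEN = 3               # known words must have length >= 3

def enumerate_splits(word, word_counts):
    """Return all ways to cover `word` with known words, allowing fillers at joints."""
    word = word.lower()
    results = []

    def recurse(rest, parts):
        if not rest:
            results.append(list(parts))
            return
        for end in range(MIN_PART_LEN, len(rest) + 1):
            part = rest[:end]
            if part not in word_counts:
                continue
            tail = rest[end:]
            if not tail:                      # final part, no filler needed
                recurse(tail, parts + [part])
            else:
                for filler in FILLERS:        # try "", "s", "es" at the joint
                    if filler and not tail.startswith(filler):
                        continue
                    recurse(tail[len(filler):], parts + [part])

    recurse(word, [])
    return results

def best_split_by_frequency(word, word_counts):
    """Pick the split with the highest geometric mean of part frequencies."""
    candidates = enumerate_splits(word, word_counts)
    if not candidates:
        return [word]                         # unknown word: leave unsplit
    def geo_mean(parts):
        return prod(word_counts[p] for p in parts) ** (1.0 / len(parts))
    return max(candidates, key=geo_mean)

# Example using the frequencies quoted for aktionsplan in the text:
counts = {"aktionsplan": 852, "aktions": 14, "aktion": 960, "akt": 224,
          "ion": 1, "plan": 710}
print(best_split_by_frequency("aktionsplan", counts))  # -> ['aktionsplan']
```

As the text notes, the purely frequency-based metric keeps aktionsplan unbroken because the compound itself is more frequent than the geometric mean of aktion and plan, which motivates the parallel-corpus refinement discussed next.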
E03-1076
empirical methods for compound splittingcompounded words are a challenge for nlp applications such as machine translation we introduce methods to learn splitting rules from monolingual and parallel corporawe evaluate them against a gold standard and measure their impact on performance of statistical mt systemsresults show accuracy of 991 and performance gains for mt of 0039 bleu on a germanenglish noun phrase translation taskwe present a method requiring no linguistically motivated morphological analysis to split compoundswe split german compound words based on the frequency of the words in the potential decompositions
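The parallel-corpus method described in the compound-splitting paper above (keep a splitting option only if each part has a translation with probability at least 0.01 among the English words of the aligned sentence, use each English word as evidence at most once, prefer the option with the most parts, and fall back to word frequencies as tie-breaker) can be sketched roughly as follows. The `lexicon[part][english_word]` representation of the GIZA++ translation table, the `freq_score` callback and the toy lexicon entries in the example are assumptions for illustration only.

```python
# Minimal sketch of the parallel-corpus-guided split selection described above.
MIN_TRANS_PROB = 0.01   # noise threshold on p(english | german) from the text

def parts_have_english_evidence(parts, english_words, lexicon):
    """True if every part matches a distinct English word with translation
    probability at least MIN_TRANS_PROB."""
    available = list(english_words)
    for part in parts:
        match = next((e for e in available
                      if lexicon.get(part, {}).get(e, 0.0) >= MIN_TRANS_PROB), None)
        if match is None:
            return False
        available.remove(match)   # each English word may be used only once
    return True

def best_split_with_parallel_evidence(candidates, english_words, lexicon, freq_score):
    """Among options whose parts all have English evidence, pick the one with the
    most parts, breaking ties with the frequency-based score."""
    supported = [c for c in candidates
                 if parts_have_english_evidence(c, english_words, lexicon)]
    if not supported:
        return None               # caller falls back to the frequency method
    return max(supported, key=lambda c: (len(c), freq_score(c)))

# Hypothetical lexicon: aktionsplan gets split, freitag would not.
lexicon = {"aktion": {"action": 0.6}, "plan": {"plan": 0.7},
           "freitag": {"friday": 0.8}, "frei": {"free": 0.5}, "tag": {"day": 0.5}}
print(best_split_with_parallel_evidence(
    [["aktionsplan"], ["aktion", "plan"]],
    ["the", "action", "plan"],
    lexicon,
    freq_score=lambda parts: 0))  # dummy tie-breaker for the example
# -> ['aktion', 'plan']
```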
using encyclopedic knowledge for named entity disambiguation we present a new method for detecting anddisambiguating named entities in open do main text a disambiguation svm kernel is trained to exploit the high coverage and rich structure of the knowledge encoded in an online encyclopedia the resultingmodel significantly outperforms a less in formed baseline 11 motivationthe defacto web search paradigm defines the re sult to a users query as roughly a set of links to the bestmatching documents selected out of billions of items availablewhenever the queries search for pinpointed factual information the burden of filling the gap between the output granularity and the targeted information stays with theusers by browsing the returned documents in or der to find the actually relevant bits of informationa frequent case are queries about named entitieswhich constitute a significant fraction of popu lar web queries according to search engine logswhen submitting queries such as john williamsor python search engine users could also be presented with a compilation of facts and specific at tributes about those named entities rather than aset of bestmatching web pagesone of the chal lenges in creating such an alternative search result page is the inherent ambiguity of the queries as several instances of the same class or different classes may share the same name in the queryas an example the work done during a summer internship at googlecontexts below are part of web documents refer ring to different people who share the same name john williams1john williams and the boston pops conducted a summer star wars concert at tan glewood2john williams lost a taipei death match against his brother axl rotten3john williams won a victoria cross for his actions at the battle of rorkes driftthe effectiveness of the search could be greatly improved if the search results were grouped together according to the corresponding sense rather than presented as a flat sensemixed list of items as an added benefit userswould have easier access to a wider variety of re sults whenever the top 10 or so results returned by the largest search engines happen to refer to only one particular sense of the query thus submerging or hidingdocuments that refer to other senses of the queryin various natural language applications significant performance gains are achieved as a function of data size rather than algorithm complex ity as illustrated by the increasingly popular use of the web as a corpus it seems therefore natural to try to exploit the webin order to also improve the performance of relation extraction ie the discovery of useful re lationships between named entities mentioned in text documentshowever if one wants to combine evidence from multiple web pages then one needs again to solve the name disambiguation problem9without solving it a relation extraction system an alyzing the sentences in the above example could mistakenly consider the third as evidence that john williams the composer fought at rorkes drift12 approachthe main goal of the research reported in this pa per is to develop a named entity disambiguation method that is intrinsically linked to a dictionarymapping proper names to their possible named en titiy denotationsmore exactly the method 1detects whether a proper name refers to a named entity included in the dictionary ties that can be denoted by the same proper name as a departure from the methodology of previous approaches the paper exploits a nontraditionalwebbased resourceconcretely it takes advan tage of some 
of the human knowledge available in wikipedia a free online encyclopedia createdthrough decentralized collective efforts of thou sands of users we show that the structure of wikipedia lends itself to a set ofuseful features for the detection and disambiguation of named entitiesthe remainder of the pa per is organized as followssection 2 describes wikipedia with an emphasis on the features that are most important to the entity disambiguation tasksection 3 describes the extraction of named entity entries fromwikipediasection 4 introduces two disambigua tion methods which are evaluated experimentally in section 5we conclude with future work and conclusions2 wikipedia a wiki encyclopediawikipedia is a free online encyclopedia writtencollaboratively by volunteers using a wiki soft ware that allows almost anyone to add and change articlesit is a multilingual resource there are about 200 language editions with varying levels of coveragewikipedia is a very dynamic andquickly growing resource articles about news worthy events are often added within days of their occurrenceas an example the september 2005 version contains 751666 articles around 180000 more articles than four months earlierthe work in this paper is based on the english version from may 2005 which contains 577860 articleseach article in wikipedia is uniquely identified by its title a sequence of words separated byunderscores with the first word always capital izedtypically the title is the most common name for the entity described in the articlewhen the name is ambiguous it is further qualified with aparenthetical expressionfor instance the arti cle on john williams the composer has the title john williams because each article describes a specific entity or concept the remainder of the paper sometimes uses the term entityinterchangeably to re fer to both the article and the corresponding entityalso let e denote the entire set of entities from wikipediafor any entity e2e etitle is the title name of the corresponding article and et is the text of the articlein general there is a manytomany correspon dence between names and entitiesthis relationis captured in wikipedia through redirect and dis ambiguation pages as described in the next two sections21 redirect pagesa redirect page exists for each alternative name that can be used to refer to an entity in wikipediathe name is transformed into a title whose article contains aredirect link to the actual article for that en tityfor example john towner williams is the full name of the composer john williamsitis therefore an alternative name for the composer and consequently the article with the ti tle john towner williams is just a pointer to thearticle for john williams an exam ple entry with a considerably higher number of redirect pages is united statesits redirect pages correspond to acronyms spanish translations misspellings or syn onyms for any given wikipedia entity e2e let er be the set of all names that redirect to e 22 disambiguation pagesanother useful structure is that of disambiguation pages which are created for ambiguous names ie names that denote two or more entities in wikipediafor example the disambiguation page for the name john williams lists 22 associated 10 title redirect disambig categories star wars music john williams john towner williams john williams film score composers 20th century classical composers john williams ian rotten john williams professional wrestlers people living in baltimore john williams none john williams british army soldiers british victoria cross 
recipients boston pops orchestra boston pops pops american orchestras the boston pops orchestra massachusetts musicians united states us usa us usa north american countries united states of america united states republics united states venus venus venus planet venus morning star planets of the solar system evening star planets solar system table 1 examples of wikipedia titles aliases and categories entitiestherefore besides the nonambiguous names that come from redirect pages additionalaliases can be found by looking for all disambiguation pages that list a particular wikipedia en tityin his philosophical article on sense and reference gottlob frege gave a famous argument to show that sense and reference are distinctin his example the planet venus may be referred to using the phrases morning starand evening starthis theoretical example is nicelycaptured in practice in wikipedia by two disam biguation pages morning star and evening star both listing venus as a potential referentfor any given wikipedia entity e 2 e let ed be the set of names whose disambiguation pages contain a link to e 23 categoriesevery article in wikipedia is required to have at least one categoryas shown in table 1 john williams is associated with a set of categories among them star wars music filmscore composers and 20th century classical com poserscategories allow articles to be placed into one or more topicsthese topics can be further categorized by associating them with one or more parent categoriesin table 1 venus is shown asboth an article title and a categoryas a cate gory it has one direct parent planets of the solarsystem which in turn belongs to two more gen eral categories planets and solar systemthus categories form a directed acyclic graph allowingmultiple categorization schemes to coexist simul taneouslythere are in total 59759 categories in wikipediafor a given wikipedia entity e 2e let ec be the set of categories to which e belongs 24 hyperlinksarticles in wikipedia often contain mentions ofentities that already have a corresponding articlewhen contributing authors mention an ex isting wikipedia entity inside an article they arerequired to link at least its first mention to the cor responding article by using links or piped linksboth types of links are exemplified in the follow ing wiki source code of a sentence from the article on italy the vatican cityvatican is now an independent enclave surrounded by romethe string from the second link denotes the title of the referenced articlethe same string is also used in the display versionif the authorwants another string displayed then the alternative string is included in a piped link after the title stringconsequently the display string for the aforementioned example is the vatican is now an independent enclave surrounded by romeas described later in section 4 the hyperlinks can pro vide useful training examples for a named entity disambiguatorwe organize all named entities from wikipedia into a dictionary structure d where each string entry d 2 d is mapped to the set of entities de that can be denoted by d in wikipediathe first step is to identify named entities ie entities with a proper name titlebecause every title inwikipedia must begin with a capital letter the de cision whether a title is a proper name relies on the following sequence of heuristic steps 11 1if etitle is a multiword title check the capitalization of all content words ie wordsother than prepositions determiners con junctions relative pronouns or negationsconsider e a named entity if and 
only if all content words are capitalized2if etitle is a one word title that contains atleast two capital letters then e is a named en tityotherwise go to step 33 count how many times etitle occurs in thetext of the article in positions other than at the beginning of sentencesif at least 75 of these occurrences are capitalized then e is a named entitythe combined heuristics extract close to half amillion named entities from wikipediathe second step constructs the actual dictionary d as fol lows the set of entries in d consists of all strings that may denote a named entity ie if e2e is a named entity then its title name etitleits redirect names er and its disambigua tion names ed are all added as entries in d each entry string d2d is mapped to de the set of entities that d may denote in wikipediaconsequently a named entity e is included in de if and only if d etitle d 2 er or d2edas illustrated in section 1 the same proper name may refer to more than one named entitythenamed entity dictionary from section 3 and the hy perlinks from wikipedia articles provide a dataset of disambiguated occurrences of proper namesas described in the followingas shown in section 24 each link contains the title name of an en tity and the proper name used to refer to itwe use the term query to denote the occurrence of a proper name inside a wikipedia articleif there is a dictionary entry matching the proper name in the query q such that the set of denoted entities qe contains at least two entities one of them the true answer entity qe then the query q is included in the datasetmore exactly if qe contains n named entities e 1 e 2 e n then the dataset will be augmented with n pairs hq e k i represented as follows hq e k i j qt j e k title the field qt contains all words occurring in a limit length window centered on the proper namethe window size is set to 55 which is the value that was observed to give optimum performance in the related task of crossdocument coreference the kronecker delta function is 1 when e k is the same as the entity qe referred in the linktable 2 lists the query pairs created for the three john williamsqueries from section 11 assuming only three en tities in wikipedia correspond to this namequery text entity title 1 boston pops conduct john williams 0 boston pops conduct john williams 0 boston pops conduct john williams 1 lost taipei match john williams 0 lost taipei match john williams 0 lost taipei match john williams 1 won victoria cross john williams 0 won victoria cross john williams 0 won victoria cross john williams table 2 disambiguation datasetthe application of this procedure on wikipedia results into a dataset of 1783868 disambiguated queries41 contextarticle similarityusing the representation from the previous sec tion the name entity disambiguation problem can be cast as a ranking problemassuming that an appropriate scoring function score is avail able the named entity corresponding to query q is defined to be the one with the highest score e argmax e k score if e qe then e represents a hit otherwise e is a miss disambiguation methods will then differ based on the way they define the scoring functionone ranking function that is evaluated experimen tally in this paper is based on the cosine similarity between the context of the query and the text of the article score cos qt kqtk e k t ke k tk the factors qt and e k t are represented in thestandard vector space model where each compo nent corresponds to a term in the vocabulary and the term weight is the standard tf idf scorethe 
vo cabulary v is created by reading all wikipedia 12 articles and recording for each word stem w itsdocument frequency df in wikipediastop words and words that are too frequent or too rareare discardeda generic document d is then repre sented as a vector of length jv j with a position for each vocabulary wordif f is the frequency ofword w in document d and n is the total num ber of wikipedia articles then the weight of word w2v in the tf idf representation of d is d w f ln n df 42 taxonomy kernelan error analysis of the cosinebased ranking method reveals that in many cases the pair hq ei fails to rank first even though words from thequery context unambiguously indicate e as the ac tual denoted entityin these cases cue words from the context do not appear in es article due to two main reasons 1the article may be too short or incomplete2even though the article captures most of therelevant concepts expressed in the query con text it does this by employing synonymous words or phrasesthe cosine similarity between q and e k can be seen as an expression of the total degree of correlation between words from the context of query q and a given named entity e k when the correlation is toolow because the wikipedia article for named entity e k does not contain all words that are relevant to e k it is worth considering the correlation between context words and the categories to which e kbe longsfor illustration consider the two queries for the name john williams from figure 1to avoid clutter figure 1 depicts only two enti ties with the name john williams in wikipedia the composer and the wrestleron top of each entity the figure shows one of their wikipedia categories together with some of their ances tor categories in the wikipedia taxonomythe two query contexts are shown at the bottom of the figurein the context on the left words such as conducted and concert denote concepts that are highly correlated with the musicians composers and film score composers categorieson the other hand their correlation with other categories in figure 1 is considerably lowerconsequently a musicians composers film score composers people by occupation people people known in connection with sports and hobbies wrestlers professional wrestlers high correlationshigh correlations conducted a summer star wars john williams john williams a taipei death lost concert matchjohn williams john williams figure 1 wordcategory correlationsgoal of this paper is to design a disambiguationmethod that 1 learns the magnitude of these correlations and 2 uses these correlations in a scor ing function together with the cosine similarityour intuition is that given the query context on the left such a ranking function has a better chance of ranking the composerentity higher than the wrestlerentity when compared with the simple cosine similarity baselinewe consider using a linear ranking function as follows e argmax e k w the feature vector contains a dedicated feature cos for cosine similarity and jv j jcj features wc corresponding to combinations of words w from the wikipedia vocabulary v and categories c from the wikipedia taxonomy c cos cos wc 1 if w2qt and c2e k c 0 otherwise the weight vector w models the magnitude of each wordcategory correlation and can be learned by training on the query dataset described at the beginning of section 4we used the kernel version of the largemargin ranking approach from which solves the optimization 13 problem in figure 2the aim of this formulation is to find a weight vector w such that 1 the number of ranking 
constraints w · Φ(q, qe) ≥ w · Φ(q, e k) from the training data that are violated is minimized and 2 the ranking function w · Φ(q, e k) generalizes well beyond the training data: minimize V(w, ξ) = 1/2 w · w + C Σ ξq,k subject to w · Φ(q, qe) − w · Φ(q, e k) ≥ 1 − ξq,k and ξq,k ≥ 0 for every training query q and candidate entity e k other than the true answer qe
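A minimal sketch of the earlier parts of this paper, the named-entity dictionary built from titles, redirect names and disambiguation names, and the tf·idf cosine ranking of candidate entities against the query context, is given below. The Entity record, whitespace tokenization and the handling of out-of-vocabulary words are assumptions; only the sets E.R, E.D, E.C, the dictionary construction and the term weight f · ln(N/df) come from the text.

```python
# Sketch of the entity dictionary and cosine-similarity ranking described above.
import math
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Entity:
    title: str
    text: str
    redirects: set = field(default_factory=set)       # E.R
    disambig_names: set = field(default_factory=set)  # E.D
    categories: set = field(default_factory=set)      # E.C

def build_dictionary(entities):
    """Map each proper-name string to the set of entities it may denote."""
    d = {}
    for e in entities:
        for name in {e.title} | e.redirects | e.disambig_names:
            d.setdefault(name, set()).add(e)
    return d

def tfidf(tokens, df, n_docs):
    """Term weights f * ln(N / df), restricted to the Wikipedia-derived vocabulary."""
    return {w: f * math.log(n_docs / df[w])
            for w, f in Counter(tokens).items() if w in df}

def cosine(u, v):
    num = sum(u[w] * v[w] for w in u if w in v)
    den = (math.sqrt(sum(x * x for x in u.values()))
           * math.sqrt(sum(x * x for x in v.values())))
    return num / den if den else 0.0

def disambiguate(query_context_tokens, name, dictionary, df, n_docs):
    """Rank the candidates for `name` by cosine(query context, article text)."""
    q_vec = tfidf(query_context_tokens, df, n_docs)
    candidates = dictionary.get(name, set())
    return max(candidates,
               key=lambda e: cosine(q_vec, tfidf(e.text.split(), df, n_docs)),
               default=None)
```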
E06-1002
using encyclopedic knowledge for named entity disambiguationwe present a new method for detecting and disambiguating named entities in open domain texta disambiguation svm kernel is trained to exploit the high coverage and rich structure of the knowledge encoded in an online encyclopediathe resulting model significantly outperforms a less informed baselinewe measure similarity between the textual context of the ne mention and the wikipedia categories of the candidatewe use context matching to link noun phrase subjects into wikipedia
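The taxonomy-kernel ranking described in the paper above uses a feature vector Φ(q, e_k) with one dedicated cosine-similarity feature plus an indicator feature for every (context word, entity category) pair, and ranks candidates by the linear score w · Φ(q, e_k). The sketch below makes assumptions: the sparse dictionary representation of Φ and w is an implementation choice, the trained weights are taken as given, and `candidate.categories` stands for any object exposing the entity's category set (for instance the Entity sketch shown earlier).

```python
# Sketch of the word-category feature map and linear ranking function described above.

def feature_map(query_tokens, candidate, cosine_score):
    """Sparse Phi(q, e_k): a 'cos' feature plus (word, category) indicator features."""
    phi = {"cos": cosine_score}
    for word in set(query_tokens):
        for category in candidate.categories:
            phi[(word, category)] = 1.0
    return phi

def score(w, phi):
    """Linear score w . Phi(q, e_k); absent weights are treated as zero."""
    return sum(w.get(f, 0.0) * v for f, v in phi.items())

def rank_candidates(query_tokens, candidates, cosine_fn, w):
    """Return the candidate maximizing w . Phi(q, e_k)."""
    return max(candidates,
               key=lambda e: score(w, feature_map(query_tokens, e, cosine_fn(e))),
               default=None)
```

The intent, as in the figure 1 discussion above, is that a learned weight for a pair such as (conducted, film score composers) can promote the composer entity even when the word conducted never appears in that entity's article.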
computing consensus translation for multiple machine translation systems using enhanced hypothesis alignment this paper describes a novel method for computing a consensus translation from the outputs of multiple machine translation systems the outputs are combined and a possibly new translation hypothesis can be generated similarly to the wellestablished rover approach of for combining speech recognition hypotheses the consensus translation is computed by voting on a confusion network to create the confusion network we produce pairwise word alignments of the original machine translation hypotheses with an enhanced statistical alignment algorithm that explicitly models word reordering the context of a whole document of translations rather than a single sentence is taken into account to produce the alignment the proposed alignment and voting approach was evaluated on several machine translation tasks including a large vocabulary task the method was also tested in the framework of multisource and speech translation on all tasks and conditions we achieved significant improvements in translation quality increasing e g the bleu score by as much as 15 relative in this work we describe a novel technique for computing a consensus translation from the outputs of multiple machine translation systemscombining outputs from different systems was shown to be quite successful in automatic speech recognition voting schemes like the rover approach of use edit distance alignment and time information to create confusion networks from the output of several asr systemssome research on multiengine machine translation has also been performed in recent yearsthe most straightforward approaches simply select for each sentence one of the provided hypothesesthe selection is made based on the scores of translation language and other models other approaches combine lattices or nbest lists from several different mt systems to be successful such approaches require compatible lattices and comparable scores of the hypotheses in the latticeshowever the scores of most statistical machine translation systems are not normalized and therefore not directly comparablefor some other mt systems the lattices andor scores of hypotheses may not be even available used the edit distance alignment extended to multiple sequences to construct a confusion network from several translation hypothesesthis algorithm produces monotone alignments only it is not able to align translation hypotheses with significantly different word order try to overcome this problemthey introduce a method that allows nonmonotone alignments of words in different translation hypotheses for the same sentencehowever this approach uses many heuristics and is based on the alignment that is performed to calculate a specific mt error measure the performance improvements are reported only in terms of this measurehere we propose an alignment procedure that explicitly models reordering of words in the hypothesesin contrast to existing approaches the context of the whole document rather than a single sentence is considered in this iterative unsupervised procedure yielding a more reliable alignmentbased on the alignment we construct a confusion network from the translation hypotheses similarly to the approach of using global system probabilities and other statistical models the voting procedure selects the best consensus hypothesis from the confusion networkthis consensus translation may be different from the original translationsthis paper is organized as followsin section 2 
we will describe the computation of consensus translations with our approachin particular we will present details of the enhanced alignment and reordering procedurea large set of experimental results on several machine translation tasks is presented in section 3 which is followed by a summarythe proposed approach takes advantage of multiple translations for a whole test corpus to compute a consensus translation for each sentence in this corpusgiven a single source sentence in the test corpus we combine m translation hypotheses ei them from m mt engineswe first choose one of the hypotheses them as the primary onewe consider this primary hypothesis to have the correct word orderwe then align and reorder the other secondary hypotheses en to match this word ordersince each hypothesis may have an acceptable word order we let every hypothesis play the role of the primary translation once and thus align all pairs of hypotheses n m in the following subsections we will explain the word alignment procedure the reordering approach and the construction of confusion networksthe word alignment is performed in analogy to the training procedure in smtthe difference is that the two sentences that have to be aligned are in the same languagewe consider the conditional probability pr of the event that given them another hypothesis en is generated from the themthen the alignment between the two hypotheses is introduced as a hidden variable this probability is then decomposed into the alignment probability pr and the lexicon probability pr as in statistical machine translation we make modelling assumptionswe use the ibm model 1 and the hidden markov model to estimate the alignment modelthe lexicon probability of a sentence pair is modelled as a product of singleword based probabilities of the aligned wordsthe training corpus for alignment is created from a test corpus of n sentences translated by all of the involved mt engineshowever the effective size of the training corpus is larger than n since all pairs of different hypotheses have to be alignedthus the effective size of the training corpus is m n the singleword based lexicon probabilities p are initialized with normalized lexicon counts collected over the sentence pairs on this corpussince all of the hypotheses are in the same language we count cooccurring equal words i e if en is the same word as themin addition we add a fraction of a count for words with identical prefixesthe initialization could be furthermore improved by using word classes partofspeech tags or a list of synonymsthe model parameters are trained iteratively in an unsupervised manner with the them algorithm using the giza toolkit the training is performed in the directions en them and them enthe updated lexicon tables from the two directions are interpolated after each iterationthe final alignments are determined using cost matrices defined by the state occupation probabilities of the trained hmm the alignments are used for reordering each secondary translation en and for computing the confusion network with symbol the words of the primary hypothesis are printed in boldthe symbol denotes a null alignment or an earc in the corresponding part of the confusion network alignment wouldwould youyou havelike coffeecoffee oror teatea and wouldwould youyou likelike your coffeecoffee oror tea reordering i wouldwould youyou likelike have some coffeecoffee or teatea would you like coffee or tea confusion would you have coffee or tea network would you like your coffee or i would you like have some 
coffee tea the alignment between en and the primary hypothesis et used for reordering is computed as a function of words in the secondary translation en with minimal costs with an additional constraint that identical words in en can not be all aligned to the same word in etthis constraint is necessary to avoid that reordered hypotheses with e g multiple consecutive articles the would be produced if fewer articles were used in the primary hypothesisthe new word order for en is obtained through sorting the words in en by the indices of the words in et to which they are alignedtwo words in en which are aligned to the same word in et are kept in the original orderafter reordering each secondary hypothesis en we determine m 1 monotone onetoone alignments between et and en n 1 m n 6 m in case of manytoone connections of words in en to a single word in et we only keep the connection with the lowest alignment coststhe onetoone alignments are convenient for constructing a confusion network in the next step of the algorithmgiven the m1 monotone onetoone alignments the transformation to a confusion network as described by is straightforwardit is explained by the example in figure 1here the original 4 hypotheses are shown followed by the alignment of the reordered secondary hypotheses 24 with the primary hypothesis 1the alignment is shown with the symbol and the words of the primary hypothesis are to the right of this symbolthe symbol denotes a null alignment or an earc in the corresponding part of the confusion network which is shown at the bottom of the figurenote that the word have in translation 2 is aligned to the word like in translation 1this alignment is acceptable considering the two translations alonehowever given the presence of the word have in translation 4 this is not the best alignmentyet the problems of this type can in part be solved by the proposed approach since every translation once plays the role of the primary translationfor each sentence we obtain a total of m confusion networks and unite them in a single latticethe consensus translation can be chosen among different alignment and reordering paths in this latticethe voting on the union of confusion networks is straightforward and analogous to the rover systemwe sum up the probabilities of the arcs which are labeled with the same word and have the same start and the same end statethese probabilities are the global probabilities assigned to the different mt systemsthey are manually adjusted based on the performance of the involved mt systems on a heldout development setin general a better consensus translation can be produced if the words hypothesized by a betterperforming system get a higher probabilityadditional scores like word confidence measures can be used to score the arcs in the latticein the final step the consensus translation is extracted as the best path from the union of confusion networksnote that the extracted consensus translation can be different from the original m translationsalternatively the nbest hypotheses can be extracted for rescoring by additional modelswe performed experiments with both approachessince m confusion networks are used the lattice may contain two best paths with the same probability the same words but different word orderwe extended the algorithm to favor more wellformed word sequenceswe assign a higher probability to each arc of the primary translation in each of the m confusion networksexperimentally this extension improved translation fluency on some tasksthe alignment and voting 
algorithm was evaluated on both small and large vocabulary tasksinitial experiments were performed on the iwslt 2004 chineseenglish and japaneseenglish tasks the data for these tasks come from the basic travel expression corpus consisting of tourismrelated sentenceswe combined the outputs of several mt systems that had officially been submitted to the iwslt 2004 evaluationeach system had used 20k sentence pairs from the btec corpus for trainingexperiments with translations of automatically recognized speech were performed on the btec italianenglish task here the involved mt systems had used about 60k sentence pairs for trainingfinally we also computed consensus translation from some of the submissions to the tcstar 2005 evaluation campaign the tcstar participants had submitted translations of manually transcribed speeches from the european parliament plenary sessions in our experiments we used the translations from spanish to englishthe mt engines for this task had been trained on 12m sentence pairs table 1 gives an overview of the test corpora on which the enhanced hypotheses alignment was computed and for which the consensus translations were determinedthe official iwslt04 test corpus was used for the iwslt 04 tasks the cstar03 test corpus was used for the speech translation taskthe march 2005 test corpus of the tcstar evaluation was used for the epps taskin table 1 the number of running words in english is the average number of running words in the hypotheses from which the consensus translation was computed the vocabulary of english is the merged vocabulary of these hypothesesfor the btec iwslt04 corpus the statistics for english is given for the experiments described in sections 33 and 35 respectivelywellestablished objective evaluation measures like the word error rate positionindependent word error rate and the bleu score were used to assess the translation qualityall measures were computed with respect to multiple reference translationsthe evaluation was caseinsensitive without considering the punctuation marksdifferent applications of the proposed combination method have been evaluatedfirst we focused on combining different mt systems which have the same source and target languagethe initial experiments were performed on the btec chineseenglish taskwe combined translations produced by 5 different mt systemstable 2 shows the performance of the best and the worst of these systems in terms of the bleu scorethe results for the consensus translation show a dramatic improvement in translation qualitythe word error rate is reduced e g from 546 to 478the research group which had submitted the best translation in 2004 translated the same test set a year later with an improved systemwe compared the consensus translation with this new translation it can be observed that the consensus translation based on the mt systems developed in 2004 is still superior to this 2005 single system translation in terms of all error measureswe also checked how many sentences in the consensus translation of the test corpus are different from the 5 original translations185 out of 500 sentences had new translationscomputing the error measures on these sentences only we observed significant improvements in wer and per and a small improvement in bleu with respect to the original translationsthus the quality of previously unseen consensus translations as generated from the original translations is acceptablein this experiment the global system probabilities for scoring the confusion networks were tuned manually on a 
development setthe distribution was 035 025 02 01 01 with 035 for the words of the best single system and 01 for the words of the worst single systemwe observed that the consensus translation did not change significantly with small perturbations of these valueshowever the relation between the probabilities is very important for good performanceno improvement can be achieved with a uniform probability distribution it is necessary to penalize translations of low qualitythe improvements in translation quality are also significant on the tcstar epps spanishenglish taskhere we combined four different systems which performed best in the tcstar 2005 evaluation see table 3compared to the best performing single system the consensus hypothesis reduces the wer from 410 to 391this result is further improved by rescoring the nbest lists derived from the confusion networks for rescoring a word penalty feature the ibm model 1 and a 4gram target language model were includedthe linear interpolation weights of these models and the score from the confusion network were optimized on a separate development set with respect to word error ratetable 4 gives examples of improved translation quality by using the consensus translation as derived from the rescored nbest listsin the iwslt 2004 evaluation the english reference translations for the chineseenglish and japaneseenglish test corpora were the same except for a permutation of the sentencesthus we could combine mt systems which have different source and the same target language performing multisource machine translation we combined two japaneseenglish and two chineseenglish systemsthe best performing system was a japaneseenglish system with a bleu score of 447 see table 5by computing the consensus translation we improved this score to 496 and also significantly reduced the error ratesto investigate the potential of the proposed approach we generated the nbest lists of consensus translationsthen for each sentence we selected the hypothesis in the nbest list with the lowest word error rate with respect to the multiple reference translations for the sentencewe then evaluated the quality of these oracle translations with all error measuresin a contrastive experiment for each sentence we simply selected the translation with the lowest wer from the original 4 mt system outputstable 6 shows that the potential for improvement is significantly larger for the consensusbased combination of translation outputs than for simple selection of the best translation1in our future work we plan to improve the scoring of hypotheses in the confusion networks to explore this large potentialsome stateoftheart speech translation systems can translate either the first best recognition hypotheses or the word lattices of an asr systemit has been previously shown that word lattice input generally improves translation qualityin practice however the translation system may choose for some sentences the paths in the lattice with many recognition errors and thus produce inferior translationsthese translations can be improved if we compute a consensus translation from the output of at least two different speech translation systemsfrom each system we take the translation of the single best asr output and the translation of the asr word latticetwo different statistical mt systems capable of translating asr word lattices have been compared by both systems produced translations of better quality on the btec italianenglish speech translation task when using lattices instead of single best asr 
outputwe obtained the output of each of the two systems under each of these translation scenarios on the cstar03 test corpusthe firstbest recognition word error rate on this corpus is 223the objective error measures for the 4 translation hypotheses are given in table 7we then computed a consensus translation of the 4 outputs with the proposed methodthe better performing word lattice translations were given higher system probabilitieswith the consensus hypothesis the word error rate went down from 295 to 285thus the negative effect of recognition errors on the translation quality was further reducedin this work we proposed a novel theoretically wellfounded procedure for computing a possibly new consensus translation from the outputs of multiple mt systemsin summary the main conthe btec italianenglish task through computing consensus translations from the output of two speech translation systems with different types of source language input tributions of this work compared to previous approaches are as follows the words of the original translation hypotheses are aligned in order to create a confusion networkthe alignment procedure explicitly models word reordering a test corpus of translations generated by each of the systems is used for the unsupervised statistical alignment trainingthus the decision on how to align two translations of a sentence takes the whole document context into account plied in speech translation in order to cope with the negative impact of speech recognition errors on translation accuracyan important feature of a reallife application of the proposed alignment technique is that the lexicon and alignment probabilities can be updated with each translated sentence andor textthus the correspondence between words in different hypotheses and consequently the consensus translation can be improved overtimethis paper is based upon work supported by the defense advanced research projects agency under contract nohr001106c0023this work was also in part funded by the european union under the integrated project tcstar technology and corpora for speech to speech translation
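A minimal sketch of the confusion-network voting step described above: after the secondary hypotheses have been reordered and aligned one-to-one to the primary hypothesis, the words (or an epsilon symbol for null alignments) are collected per primary position, and the consensus word in each slot is the one with the highest sum of global system probabilities. Representing each aligned secondary hypothesis as a list parallel to the primary one, and the one-slot-per-primary-word construction, are simplifications of the full lattice; the system weights and the example alignments are illustrative.

```python
# Sketch of ROVER-style confusion-network voting for consensus translation.
EPS = "<eps>"

def build_confusion_network(primary, aligned_secondaries, weights):
    """weights[0] is the global probability of the primary system,
    weights[1:] those of the secondary systems (tuned on held-out data)."""
    slots = []
    for i, primary_word in enumerate(primary):
        votes = {primary_word: weights[0]}
        for sys_idx, hyp in enumerate(aligned_secondaries, start=1):
            word = hyp[i] if hyp[i] is not None else EPS   # None = null alignment
            votes[word] = votes.get(word, 0.0) + weights[sys_idx]
        slots.append(votes)
    return slots

def consensus_translation(slots):
    """Pick the highest-weighted word in every slot; drop epsilon arcs."""
    best = [max(votes, key=votes.get) for votes in slots]
    return [w for w in best if w != EPS]

# Toy example loosely based on the coffee/tea figure in the text:
primary = ["would", "you", "like", "coffee", "or", "tea"]
secondaries = [["would", "you", "have", "coffee", "or", "tea"],
               ["would", "you", "like", "coffee", "or", None],
               ["would", "you", "like", "coffee", None, "tea"]]
print(" ".join(consensus_translation(
    build_confusion_network(primary, secondaries, [0.4, 0.2, 0.2, 0.2]))))
# -> "would you like coffee or tea"
```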
E06-1005
computing consensus translation for multiple machine translation systems using enhanced hypothesis alignmentthis paper describes a novel method for computing a consensus translation from the outputs of multiple machine translation systemsthe outputs are combined and a possibly new translation hypothesis can be generatedsimilarly to the wellestablished rover approach of for combining speech recognition hypotheses the consensus translation is computed by voting on a confusion networkto create the confusion network we produce pairwise word alignments of the original machine translation hypotheses with an enhanced statistical alignment algorithm that explicitly models word reorderingthe context of a whole document of translations rather than a single sentence is taken into account to produce the alignmentthe proposed alignment and voting approach was evaluated on several machine translation tasks including a large vocabulary taskthe method was also tested in the framework of multisource and speech translationon all tasks and conditions we achieved significant improvements in translation quality increasing eg the bleu score by as much as 15 relativewe align synonyms and different morphological forms of words to each other but this is done implicitly relying on the parallel text to learn word alignmentswe use pairwise alignment algorithms based on symmetric alignments from a hmm alignment modeldifferent word orderings are taken into account by training alignment models by considering all hypothesis pairs as a parallel corpus using giza we propose using a statistical word alignment algorithm as a more robust way of aligning outputs into a confusion network for system combination
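The reordering step of the consensus-translation method summarized above, sorting the words of a secondary hypothesis by the position of the primary word they are aligned to while keeping the original order for words aligned to the same target, can be sketched as follows. The list-based alignment representation and the treatment of unaligned words are assumptions made for illustration.

```python
# Sketch of alignment-based reordering of a secondary hypothesis.
def reorder_secondary(secondary_words, alignment):
    """alignment[j] is assumed to give the index of the primary word to which
    secondary word j is aligned, or None for an unaligned word.
    Python's sort is stable, so words aligned to the same primary word keep
    their original relative order, as described in the text."""
    def key(j):
        target = alignment[j]
        return target if target is not None else j  # simplification for unaligned words
    order = sorted(range(len(secondary_words)), key=key)
    return [secondary_words[j] for j in order]

# Hypothetical example: secondary "coffee or tea would you like"
# aligned word-by-word to primary "would you like coffee or tea".
secondary = ["coffee", "or", "tea", "would", "you", "like"]
alignment = [3, 4, 5, 0, 1, 2]
print(reorder_secondary(secondary, alignment))
# -> ['would', 'you', 'like', 'coffee', 'or', 'tea']
```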
online learning of approximate dependency parsing algorithms in this paper we extend the maximum spanning tree dependency parsing framework of mcdonald et al to incorporate higherorder feature representations and allow dependency structures with multiple parents per word we show that those extensions can make the mst framework computationally intractable but that the intractability can be circumvented with new approximate parsing algorithms we conclude with experiments showing that discriminative online learning using those approximate algorithms achieves the best reported parsing accuracy for czech and danish dependency representations of sentences model headdependent syntactic relations as edges in a directed graphfigure 1 displays a dependency representation for the sentence john hit the ball with the batthis sentence is an example of a projective tree representation in which all edges can be drawn in the plane with none crossingsometimes a nonprojective representations are preferred as in the sentence in figure 21 in particular for freerword order languages nonprojectivity is a common phenomenon since the relative positional constraints on dependents is much less rigidthe dependency structures in figures 1 and 2 satisfy the tree constraint they are weakly connected graphs with a unique root node and each nonroot node has a exactly one parentthough trees are more common some formalisms allow for words to modify multiple parents recently mcdonald et al have shown that treating dependency parsing as the search for the highest scoring maximum spanning tree in a graph yields efficient algorithms for both projective and nonprojective treeswhen combined with a discriminative online learning algorithm and a rich feature set these models provide stateoftheart performance across multiple languageshowever the parsing algorithms require that the score of a dependency tree factors as a sum of the scores of its edgesthis firstorder factorization is very restrictive since it only allows for features to be defined over single attachment decisionsprevious work has shown that conditioning on neighboring decisions can lead to significant improvements in accuracy in this paper we extend the mst parsing framework to incorporate higherorder feature representations of boundedsize connected subgraphswe also present an algorithm for acyclic dependency graphs that is dependency graphs in which a word may depend on multiple headsin both cases parsing is in general intractable and we provide novel approximate algorithms to make these cases tractablewe evaluate these algorithms within an online learning framework which has been shown to be robust with respect approximate inference and describe experiments displaying that these new models lead to stateoftheart accuracy for english and the best accuracy we know of for czech and danishdependencytree parsing as the search for the maximum spanning tree in a graph was proposed by mcdonald et al this formulation leads to efficient parsing algorithms for both projective and nonprojective dependency trees with the eisner algorithm and the chuliuedmonds algorithm respectivelythe formulation works by defining the score of a dependency tree to be the sum of edge scores where x x1 xn is an input sentence and y a dependency tree for xwe can view y as a set of tree edges and write e y to indicate an edge in y from word xi to word xjconsider the example from figure 1 where the subscripts index the nodes of the treethe score of this tree would then be we call this firstorder 
dependency parsing since scores are restricted to a single edge in the dependency treethe score of an edge is in turn computed as the inner product of a highdimensional feature representation of the edge with a corresponding weight vector this is a standard linear classifier in which the weight vector w are the parameters to be learned during trainingwe should note that f can be based on arbitrary features of the edge and the input sequence xgiven a directed graph g the maximum spanning tree problem is to find the highest scoring subgraph of g that satisfies the tree constraint over the vertices v by defining a graph in which the words in a sentence are the vertices and there is a directed edge between all words with a score as calculated above mcdonald et al showed that dependency parsing is equivalent to finding the mst in this graphfurthermore it was shown that this formulation can lead to stateoftheart results when combined with discriminative learning algorithmsalthough the mst formulation applies to any directed graph our feature representations and one of the parsing algorithms rely on a linear ordering of the vertices namely the order of the words in the sentencerestricting scores to a single edge in a dependency tree gives a very impoverished view of dependency parsingyamada and matsumoto showed that keeping a small amount of parsing history was crucial to improving parsing performance for their locallytrained shiftreduce svm parserit is reasonable to assume that other parsing models might benefit from features over previous decisionshere we will focus on methods for parsing secondorder spanning treesthese models factor the score of the tree into the sum of adjacent edge pair scoresto quantify this consider again the example from figure 1in the secondorder spanning tree model the score would be here we use the secondorder score function s which is the score of creating a pair of adjacent edges from word xi to words xk and xjfor instance s is the score of creating the edges from hit to with and from hit to ballthe score functions are relative to the left or right of the parent and we never score adjacent edges that are on different sides of the parent for the adjacent edges from hit to john and ballthis independence between left and right descendants allow us to use a o secondorder projective parsing algorithm as we will see laterwe write s when xj is the first left or first right dependent of word xifor example s is the score of creating a dependency from hit to ball since ball is the first child to the right of hitmore formally if the word xi0 has the children shown in this picture this secondorder factorization subsumes the firstorder factorization since the score function could just ignore the middle argument to simulate firstorder scoringthe score of a tree for secondorder parsing is now where k and j are adjacent sameside children of i in the tree ythe secondorder model allows us to condition on the most recent parsing decision that is the last dependent picked up by a particular word which is analogous to the the markov conditioning of in the charniak parser for projective mst parsing the firstorder algorithm can be extended to the secondorder case as was noted by eisner the intuition behind the algorithm is shown graphically in figure 3 which displays both the firstorder and secondorder algorithmsin the firstorder algorithm a word will gather its left and right dependents independently by gathering each half of the subtree rooted by its dependent in separate stagesby 
splitting up chart items into left and right components the eisner algorithm only requires 3 indices to be maintained at each step as discussed in detail elsewhere [figure 3 caption: shows how h1 creates a dependency to h3 with the secondorder knowledge that the last dependent of h1 was h2; this is done through the creation of a sibling item; in the firstorder model the dependency to h3 is created after the algorithm has forgotten that h2 was the last dependent] for the secondorder algorithm the key insight is to delay the scoring of edges until pairs of dependents have been gatheredthis allows for the collection of pairs of adjacent dependents in a single stage which allows for the incorporation of secondorder scores while maintaining cubictime parsingthe eisner algorithm can be extended to an arbitrary mthorder model with a complexity of o(n^(m+1)) for m > 1 an mthorder parsing algorithm will work similarly to the secondorder algorithm except that we collect m pairs of adjacent dependents in succession before attaching them to their parentunfortunately secondorder nonprojective mst parsing is nphard as shown in appendix a to circumvent this we designed an approximate algorithm based on the exact o(n^3) secondorder projective eisner algorithmthe approximation works by first finding the highest scoring projective parseit then rearranges edges in the tree one at a time as long as such rearrangements increase the overall score and do not violate the tree constraint (a python sketch of this procedure is given after the paper text below) we can easily motivate this approximation by observing that even in nonprojective languages like czech and danish most trees are primarily projective with just a few nonprojective edges thus by starting with the highest scoring projective tree we are typically only a small number of transformations away from the highest scoring nonprojective treethe algorithm is shown in figure 4 [figure 4 header: 2-order-non-proj-approx(sentence x = x0 ... xn with x0 = root, weight function s)] the expression y[i -> j] denotes the dependency graph identical to y except that xj's parent is xi instead of what it was in ythe test tree(y) is true iff the dependency graph y satisfies the tree constraintin more detail line 1 of the algorithm sets y to the highest scoring secondorder projective treethe loop of lines 2-16 exits only when no further score improvement is possibleeach iteration seeks the single highestscoring parent change to y that does not break the tree constraintto that effect the nested loops starting in lines 4 and 5 enumerate all (i, j) pairsline 6 sets y' to the dependency graph obtained from y by changing xj's parent to xiline 7 checks that the move from y to y' is valid by testing that xj's parent was not already xi and that y' is a treeline 8 computes the score change from y to y'if this change is larger than the previous best change we record how this new tree was created after considering all possible valid edge changes to the tree the algorithm checks to see that the best new tree does have a higher scoreif that is the case we change the tree permanently and reenter the loopotherwise we exit since there are no single edge switches that can improve the scorethis algorithm allows for the introduction of nonprojective edges because we do not restrict any of the edge changes except to maintain the tree propertyin fact if any edge change is ever made the resulting tree is guaranteed to be nonprojective otherwise there would have been a higher scoring projective tree that would have already been found by the exact projective parsing algorithmit is not difficult to find examples for which this approximation will terminate
without returning the highestscoring nonprojective parseit is clear that this approximation will always terminate there are only a finite number of dependency trees for any given sentence and each iteration of the loop requires an increase in score to continuehowever the loop could potentially take exponential time so we will bound the number of edge transformations to a fixed value m it is easy to argue that this will not hurt performanceeven in freerword order languages such as czech almost all nonprojective dependency trees are primarily projective modulo a few nonprojective edgesthus if our inference algorithm starts with the highest scoring projective parse the best nonprojective parse only differs by a small number of edge transformationsfurthermore it is easy to show that each iteration of the loop takes o time resulting in a o runtime algorithmin practice the approximation terminates after a small number of transformations and we do not need to bound the number of iterations in our experimentswe should note that this is one of many possible approximations we could have madeanother reasonable approach would be to first find the highest scoring firstorder nonprojective parse and then rearrange edges based on second order scores in a similar manner to the algorithm we describedwe implemented this method and found that the results were slightly worsekromann argued for a dependency formalism called discontinuous grammar and annotated a large set of danish sentences using this formalism to create the danish dependency treebank the formalism allows for a word to have multiple parentsexamples include verb coordination in which the subject or object is an argument of several verbs and relative clauses in which words must satisfy dependencies both inside and outside the clausean example is shown in figure 5 for the sentence he looks for and sees elephantshere the pronoun he is the subject for both verbs in the sentence and the noun elephants the corresponding objectin the danish dependency treebank roughly 5 of words have more than one parent which breaks the single parent constraint we have previously required on dependency structureskromann also allows for cyclic dependencies though we deal only with acyclic dependency graphs herethough less common than trees dependency graphs involving multiple parents are well established in the literature unfortunately the problem of finding the dependency structure with highest score in this setting is intractable to create an approximate parsing algorithm for dependency structures with multiple parents we start with our approximate secondorder nonprojective algorithm outlined in figure 4we use the nonprojective algorithm since the danish dependency treebank contains a small number of nonprojective arcswe then modify lines 710 of this algorithm so that it looks for the change in parent or the addition of a new parent that causes the highest change in overall score and does not create a cycle2like before we make one change per iteration and that change will depend on the resulting score of the new treeusing this simple new approximate parsing algorithm we train a new parser that can produce multiple parentsin this section we review the work of mcdonald et al for online largemargin dependency parsingas usual for supervised learning we assume a training set t tt1 consisting of pairs of a sentence xt and its correct dependency representation ytthe algorithm is an extension of the margin infused relaxed algorithm to learning with structured outputs in the 
present case dependency structuresfigure 6 gives pseudocode for the algorithm [in the pseudocode y' = arg max_y' s(xt, y')] an online learning algorithm considers a single training instance for each update to the weight vector w we use the common method of setting the final weight vector as the average of the weight vectors after each iteration which has been shown to alleviate overfittingon each iteration the algorithm considers a single training instancewe parse this instance to obtain a predicted dependency graph and find the smallestnorm update to the weight vector w that ensures that the training graph outscores the predicted graph by a margin proportional to the loss of the predicted graph relative to the training graph which is the number of words with incorrect parents in the predicted tree (a python sketch of this update is given after the paper text below) note that we only impose margin constraints between the single highestscoring graph and the correct graph relative to the current weight settingpast work on treestructured outputs has used constraints for the kbest scoring tree or even all possible trees by using factored representations however we have found that a single margin constraint per example leads to much faster training with a negligible degradation in performancefurthermore this formulation relates learning directly to inference which is important since we want the model to set weights relative to the errors made by an approximate inference algorithmthis algorithm can thus be viewed as a largemargin version of the perceptron algorithm for structured outputs collins online learning algorithms have been shown to be robust even with approximate rather than exact inference in problems such as word alignment sequence analysis and phrasestructure parsing this robustness to approximations comes from the fact that the online framework sets weights with respect to inferencein other words the learning method sees common errors due to approximate inference and adjusts weights to correct for themthe work of daume and marcu formalizes this intuition by presenting an online learning framework in which parameter updates are made directly with respect to errors in the inference algorithmwe show in the next section that this robustness extends to approximate dependency parsingthe score of adjacent edges relies on the definition of a feature representation fas noted earlier this representation subsumes the firstorder representation of mcdonald et al so we can incorporate all of their features as well as the new secondorder features we now describethe old firstorder features are built from the parent and child words their pos tags and the pos tags of surrounding words and those of words between the child and the parent as well as the direction and distance from the parent to the childthe secondorder features are built from the following conjunctions of word and pos identity predicates (xi-pos, xk-pos, xj-pos) (xk-pos, xj-pos) (xk-word, xj-word) (xk-word, xj-pos) (xk-pos, xj-word) where xi-pos is the partofspeech of the ith word in the sentencewe also include conjunctions between these features and the direction and distance from sibling j to sibling k we determined the usefulness of these features on the development set which also helped us find out that features such as the pos tags of words between the two siblings would not improve accuracywe also ignored features over triples of words since this would explode the size of the feature spacewe evaluate dependencies on per word accuracy which is the percentage of words in the sentence with the correct parent in the tree and on complete dependency
analysisin our evaluation we exclude punctuation for english and include it for czech and danish which is the standardto create data sets for english we used the yamada and matsumoto head rules to extract dependency trees from the wsj setting sections 221 as training section 22 for development and section 23 for evaluationthe models rely on partofspeech tags as input and we used the ratnaparkhi tagger to provide these for the development and evaluation setthese data sets are exclusively projective so we only compare the projective parsers using the exact projective parsing algorithmsthe purpose of these experiments is to gauge the overall benefit from including secondorder features with exact parsing algorithms which can be attained in the projective settingresults are shown in table 1we can see that there is clearly an advantage in introducing secondorder featuresin particular the complete tree metric is improved considerablyfor the czech data we used the predefined training development and testing split of the prague dependency treebank and the automatically generated pos tags supplied with the data which we reduce to the pos tag set from collins et al on average 23 of the sentences in the training development and test sets have at least one nonprojective dependency though less than 2 of total edges are actually nonprojectiveresults are shown in table 2mcdonald et al showed a substantial improvement in accuracy by modeling nonprojective edges in czech shown by the difference between two firstorder modelstable 2 shows that a secondorder model provides a comparable accuracy boost even using an approximate nonprojective algorithmthe secondorder nonprojective model accuracy of 852 is the highest reported accuracy for a single parser for these datasimilar results were obtained by hall and novak who take the best output of the charniak parser extended to czech and rerank slight variations on this output that introduce nonprojective edgeshowever this system relies on a much slower phrasestructure parser as its base model as well as an auxiliary reranking moduleindeed our secondorder projective parser analyzes the test set in 16m32s and the nonprojective approximate parser needs 17m03s to parse the entire evaluation set showing that runtime for the approximation is completely dominated by the initial call to the secondorder projective algorithm and that the postprocess edge transformation loop typically only iterates a few times per sentencefor our experiments we used the danish dependency treebank v10the treebank contains a small number of intersentence and cyclic dependencies and we removed all sentences that contained such structuresthe resulting data set contained 5384 sentenceswe partitioned the data into contiguous 8020 trainingtesting splitswe held out a subset of the training data for development purposeswe compared three systems the standard secondorder projective and nonprojective parsing models as well as our modified secondorder nonprojective model that allows for the introduction of multiple parents all systems use goldstandard partofspeech since no trained tagger is readily available for danishresults are shown in figure 3as might be expected the nonprojective parser does slightly better than the projective parser because around 1 of the edges are nonprojectivesince each word may have an arbitrary number of parents we must use precision and recall rather than accuracy to measure performancethis also means that the correct training loss is no longer the hamming lossinstead we use 
false positives plus false negatives over edge decisions which balances precision and recall as our ultimate performance metricas expected for the basic projective and nonprojective parsers recall is roughly 5 lower than precision since these models can only pick up at most one parent per wordfor the parser that can introduce multiple parents we see an increase in recall of nearly 3 absolute with a slight drop in precisionthese results are very promising and further show the robustness of discriminative online learning with approximate parsing algorithmswe described approximate dependency parsing algorithms that support higherorder features and multiple parentswe showed that these approximations can be combined with online learning to achieve fast parsing with competitive parsing accuracythese results show that the gain from allowing richer representations outweighs the loss from approximate parsing and further shows the robustness of online learning algorithms with approximate inferencethe approximations we have presented are very simplethey start with a reasonably good baseline and make small transformations until the score of the structure convergesthese approximations work because freerword order languages we studied are still primarily projective making the approximate starting point close to the goal parsehowever we would like to investigate the benefits for parsing of more principled approaches to approximate learning and inference techniques such as the learning as search optimization framework of this framework will possibly allow us to include effectively more global features over the dependency structure than those in our current secondorder modelthis work was supported by nsf itr grants 0205448
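the following sketches illustrate, in python, some of the computations described in the paper above; they are minimal sketches under stated assumptions, not the authors' implementation. the first one shows the first-order edge-factored scoring, where a tree is scored as the sum of its edge scores s(i, j) = w · f(i, j); the feature templates and all names here are toy placeholders, not the paper's feature set.

def edge_features(x, i, j):
    # toy feature templates standing in for f(i, j)
    return {"head=" + x[i] + "|mod=" + x[j]: 1.0,
            "dist=" + str(abs(i - j)): 1.0}

def edge_score(w, x, i, j):
    # s(i, j) = w . f(i, j)
    return sum(w.get(feat, 0.0) * val for feat, val in edge_features(x, i, j).items())

def tree_score(w, x, y):
    # first-order factorization: the score of a tree is the sum of its edge scores
    return sum(edge_score(w, x, i, j) for (i, j) in y)

# usage: x is the sentence (index 0 = artificial root), y a set of (head, modifier) edges
x = ["<root>", "john", "hit", "the", "ball", "with", "the", "bat"]
y = {(0, 2), (2, 1), (2, 4), (4, 3), (2, 5), (5, 7), (7, 6)}
print(tree_score({}, x, y))   # 0.0 with an empty (untrained) weight vector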
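a sketch of the second-order sibling factorization, summing s(xi, xk, xj) over adjacent same-side dependents of each head, with k = None standing for the first dependent on a side (the s(xi, -, xj) case in the text); score2 is an assumed scoring function, and the outward collection order of dependents follows the description above.

def second_order_tree_score(heads, score2):
    # heads[j] = parent index of word j (heads[0] is ignored for the root);
    # score2(i, k, j) is the assumed second-order score, with k = None for the
    # first dependent on that side of the head
    n = len(heads)
    total = 0.0
    for i in range(n):
        left = [j for j in range(n) if j != 0 and heads[j] == i and j < i]
        right = [j for j in range(n) if j != 0 and heads[j] == i and j > i]
        # dependents are collected from the head outward, so the left dependents
        # are visited from the closest to the farthest
        for side in (list(reversed(left)), right):
            prev = None
            for j in side:
                total += score2(i, prev, j)
                prev = j
    return total

# usage with a dummy scorer that just counts scored (head, sibling, dependent) triples
heads = [None, 2, 0, 4, 2, 2, 7, 5]   # tree of "john hit the ball with the bat"
print(second_order_tree_score(heads, lambda i, k, j: 1.0))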
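a sketch of the approximate second-order non-projective search of figure 4: start from the best projective tree, then repeatedly apply the single parent change that most improves the score without violating the tree constraint; projective_parse, score_tree and is_tree are assumed helpers.

def approx_nonprojective_parse(x, score_tree, projective_parse, is_tree):
    # line 1: start from the highest-scoring second-order projective tree
    y = list(projective_parse(x))          # y[j] = parent of word j, word 0 = root
    best_score = score_tree(x, y)
    while True:
        best_change, best_move = 0.0, None
        # lines 4-5: enumerate all (i, j) pairs; line 6: build y[i -> j]
        for j in range(1, len(y)):
            for i in range(len(y)):
                if i == j or y[j] == i:
                    continue
                y_new = list(y)
                y_new[j] = i               # change xj's parent to xi
                if not is_tree(y_new):     # line 7: keep the tree constraint
                    continue
                change = score_tree(x, y_new) - best_score   # line 8
                if change > best_change:
                    best_change, best_move = change, (i, j)
        if best_move is None:              # no improving single change: stop
            return y
        i, j = best_move                   # apply the best change and iterate
        y[j] = i
        best_score += best_change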
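a sketch of the acyclicity test needed by the multi-parent variant used for danish, where the search may also add a new parent as long as no cycle is created; the list-of-parent-sets representation is an assumption of this sketch.

def creates_cycle(parents, i, j):
    # parents[k] = set of current heads of word k (may hold more than one).
    # adding the edge i -> j (xi becomes a head of xj) closes a cycle iff
    # xj is already an ancestor of xi, i.e. j is reachable from i via head links
    stack, seen = [i], set()
    while stack:
        k = stack.pop()
        if k == j:
            return True
        if k in seen:
            continue
        seen.add(k)
        stack.extend(parents[k])
    return False

# usage: only add the candidate edge when it keeps the graph acyclic
# if not creates_cycle(parents, i, j): parents[j].add(i)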
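a sketch of a mira update with a single margin constraint, as in the online learning setup described above: the smallest-norm update that makes the correct graph outscore the predicted one by at least its loss; the sparse dict feature vectors and helper names are illustrative.

def sparse_dot(u, v):
    return sum(val * v.get(feat, 0.0) for feat, val in u.items())

def mira_update(w, feats_gold, feats_pred, loss):
    # difference vector f(x, y_t) - f(x, y')
    diff = dict(feats_gold)
    for feat, val in feats_pred.items():
        diff[feat] = diff.get(feat, 0.0) - val
    norm_sq = sum(val * val for val in diff.values())
    if norm_sq == 0.0:
        return w
    # smallest-norm update satisfying  s(y_t) - s(y') >= loss(y_t, y')
    margin = sparse_dot(diff, w)          # current score gap s(y_t) - s(y')
    tau = max(0.0, (loss - margin) / norm_sq)
    for feat, val in diff.items():
        w[feat] = w.get(feat, 0.0) + tau * val
    return w
# the final weights would be averaged over all updates, as described above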
E06-1011
online learning of approximate dependency parsing algorithmsin this paper we extend the maximum spanning tree dependency parsing framework of mcdonald et al to incorporate higherorder feature representations and allow dependency structures with multiple parents per wordwe show that those extensions can make the mst framework computationally intractable but that the intractability can be circumvented with new approximate parsing algorithmswe conclude with experiments showing that discriminative online learning using those approximate algorithms achieves the best reported parsing accuracy for czech and danishwe propose a secondorder graphbased dependency parsing model which incorporates features from the two kinds of subtreeswe use the viterbi decoding algorithm to achieve o parsing timewe show that nonprojective dependency parsing with horizontal markovization is fnphardwe define a secondorder dependency parsing model in which interactions between adjacent siblings are allowed
making tree kernels practical for natural language learning in recent years tree kernels have been proposed for the automatic learning of natural language applications unfortunately they show an inherent super linear complexity and a lower accuracy than traditional attributevalue methods in this paper we show that tree kernels are very helpful in the processing of natural language as we provide a simple algorithm to compute tree kernels in linear average running time and our study on the classification properties of diverse tree kernels show that kernel combinations always improve the traditional methods experiments with support vector machines on the predicate argument classification task provide empirical support to our thesis in recent years tree kernels have been shown to be interesting approaches for the modeling of syntactic information in natural language tasks eg syntactic parsing relation extraction named entity recognition and semantic parsing the main tree kernel advantage is the possibility to generate a high number of syntactic features and let the learning algorithm to select those most relevant for a specific applicationin contrast their major drawback are the computational time complexity which is superlinear in the number of tree nodes and the accuracy that they produce is often lower than the one provided by linear models on manually designed featuresto solve problem a linear complexity algorithm for the subtree kernel computation was designed in unfortunately the st set is rather poorer than the one generated by the subset tree kernel designed in intuitively an st rooted in a node n of the target tree always contains all ns descendants until the leavesthis does not hold for the ssts whose leaves can be internal nodesto solve the problem a study on different tree substructure spaces should be carried out to derive the tree kernel that provide the highest accuracyon the one hand ssts provide learning algorithms with richer information which may be critical to capture syntactic properties of parse trees as shown for example in on the other hand if the sst space contains too many irrelevant features overfitting may occur and decrease the classification accuracy as a consequence the fewer features of the st approach may be more appropriatein this paper we aim to solve the above problemswe present an algorithm for the evaluation of the st and sst kernels which runs in linear average time and a study of the impact of diverse tree kernels on the accuracy of support vector machines our fast algorithm computes the kernels between two syntactic parse trees in o average time where m and n are the number of nodes in the two treesthis low complexity allows svms to carry out experiments on hundreds of thousands of training instances since it is not higher than the complexity of the polynomial kernel widely used on large experimentation egto confirm such hypothesis we measured the impact of the algorithm on the time required by svms for the learning of about 122774 predicate argument examples annotated in propbank and 37948 instances annotated in framenet regarding the classification properties we studied the argument labeling accuracy of st and sst kernels and their combinations with the standard features the results show that on both propbank and framenet datasets the sstbased kernel ie the richest in terms of substructures produces the highest svm accuracywhen ssts are combined with the manual designed features we always obtain the best figure classifierthis suggests that the many 
fragments included in the sst space are relevant and since their manual design may be problematic tree kernels provide a remarkable help in feature engineeringin the remainder of this paper section 2 describes the parse tree kernels and our fast algorithmsection 3 introduces the predicate argument classification problem and its solutionsection 4 shows the comparative performance in term of the execution time and accuracyfinally section 5 discusses the related work whereas section 6 summarizes the conclusionsthe kernels that we consider represent trees in terms of their substructures these latter define feature spaces which in turn are mapped into vector spaces egr the associated kernel function measures the similarity between two trees by counting the number of their common fragmentsmore precisely a kernel function detects if a tree subpart belongs to the feature space that we intend to generatefor such purpose the fragment types need to be describedwe consider two important characterizations the subtrees and the subset trees in our study we consider syntactic parse trees consequently each node with its children is associated with a grammar production rule where the symbol at lefthand side corresponds to the parent node and the symbols at righthand side are associated with its childrenthe terminal symbols of the grammar are always associated with the leaves of the treefor example figure 1 illustrates the syntactic parse of the sentence quotmary brought a cat to schoolquotwe define as a subtree any node of a tree along with all its descendantsfor example the line in figure 1 circles the subtree rooted in the np nodea subset tree is a more general structurethe difference with the subtrees is that the leaves can be associated with nonterminal symbolsthe ssts satisfy the constraint that they are generated by applying the same grammatical rule set which generated the original treefor example s n vp is a sst of the tree in figure 1 which has two nonterminal symbols n and vp as leavesgiven a syntactic tree we can use as feature representation the set of all its sts or sstsfor example figure 2 shows the parse tree of the sentence quotmary brought a catquot together with its 6 sts whereas figure 3 shows 10 ssts of the subtree of figure 2 rooted in vpthe high different number of substructures gives an intuitive quantification of the different information level between the two treebased representationsthe main idea of tree kernels is to compute the number of the common substructures between two trees t1 and t2 without explicitly considering the whole fragment spacefor this purpose we slightly modified the kernel function proposed in by introducing a parameter σ which enables the st or the sst evaluationgiven the set of fragments f1 f2 i f we defined the indicator function ii which is equal 1 if the target fi is rooted at node n and 0 otherwisewe define where nt1 and nt2 are the sets of the t1s and t2s nodes respectively and a where σ e 10 11 nc is the number of the children of n1 and cjn is the jth child of the node n note that since the productions are the same nc ncwhen σ 0 a is equal 1 only if bj a 1 ie all the productions associated with the children are identicalby recursively applying this property it follows that the subtrees in n1 and n2 are identicalthus eq1 evaluates the subtree kernelwhen σ 1 a evaluates the number of ssts common to n1 and n2 as proved in additionally we study some variations of the above kernels which include the leaves in the fragment spacefor this purpose it is enough 
to add the condition 0 if n1 and n2 are leaves and their associated symbols are equal then a 1 to the recursive rule set for the a evaluation we will refer to such extended kernels as stbow and sstbow moreover we add the decay factor λ by modifying steps and as follows1 the computational complexity of eq1 is owe will refer to this basic implementation as the quadratic tree kernel however as observed in this worst case is quite unlikely for the syntactic trees of natural language sentences thus we can design algorithms that run in linear time on averageto compute the kernels defined in the previous section we sum the a function for each pair e nt1 x nt2 when the productions associated with n1 and n2 are different we can avoid to evaluate a since it is 0 n2get next elem get the head element and move the pointer to the next element thus we look for a node pair set np e nt1 x nt2 p p1 where p returns the production rule associated with n to efficiently build np we extract the l1 and l2 lists of the production rules from t1 and t2 sort them in the alphanumeric order and scan them to find the node pairs such that p e l1nl2step may require only o time but if p appears r1 times in t1 and p is repeated r2 times in t2 we need to consider r1 x r2 pairsthe formal algorithm is given in table 1note that the list sorting can be done only once at the data preparation time in o the algorithm shows that the worst case occurs when the parse trees are both generated using only one production rule ie the two internal while cycles carry out nt1xnt2 iterationsin contrast two identical parse trees may generate a linear number of nonnull pairs if there are few groups of nodes associated with the same production rule such approach is perfectly compatible with the dynamic programming algorithm which computes ain fact the only difference with the original approach is that the matrix entries corresponding to pairs of different production rules are not consideredsince such entries contain null values they do not affect the application of the original dynamic programmingmoreover the order of the pair evaluation can be established at run time starting from the root nodes towards the childrenan interesting application of the sst kernel is the classification of the predicate argument structures defined in propbank or framenet figure 4 shows the parse tree of the sentence quotmary brought a cat to schoolquot along with the predicate argument annotation proposed in the propbank projectonly verbs are considered as predicates whereas arguments are labeled sequentially from arg0 to arg9also in framenet predicateargument information is described but for this purpose richer semantic structures called frames are usedthe frames are schematic representations of situations involving various participants properties and roles in which a word may be typically usedframe elements or semantic roles are arguments of predicates called target wordsfor example the following sentence is annotated according to the arrest frame time one saturday night authorities police in brooklyn target apprehended suspect sixteen teenagersthe roles suspect and authorities are specific to the framethe common approach to learn the classification of predicate arguments relates to the extraction of features from the syntactic parse tree of the target sentencein seven different features2 which aim to capture the relation between the predicate and its arguments were proposedfor example the parse tree path of the pair in the syntactic tree of figure 4 is v t vp 1 
npit encodes the dependency between the predicate and the argument as a sequence of nonterminal labels linked by direction symbols an alternative tree kernel representation proposed in is the selection of the minimal tree subset that includes a predicate with only one of its argumentsfor example in figure 4 the substructures inside the three frames are the semanticsyntactic structures associated with the three arguments of the verb to bring iesarg0 sarg1 and sargmgiven a feature representation of predicate ar2namely they are phrase type parse tree path predicate word head word governing category position and voice guments we can build an individual onevsall classifier ci for each argument ias a final decision of the multiclassifier we select the argument type argt associated with the maximum value among the scores provided by the ci ie t argmaxis score where s is the set of argument typeswe adopted the ova approach as it is simple and effective as showed in note that the representation in figure 4 is quite intuitive and to conceive it the designer requires much less linguistic knowledge about semantic roles than those necessary to define relevant features manuallyto understand such point we should make a step back before gildea and jurafsky defined the first set of features for semantic role labeling the idea that syntax may have been useful to derive semantic information was already inspired by linguists but from a machine learning point of view to decide which tree fragments may have been useful for semantic role labeling was not an easy taskin principle the designer should have had to select and experiment all possible tree subpartsthis is exactly what the tree kernels can automatically do the designer just need to roughly select the interesting whole subtree and the tree kernel will generate all possible syntactic features from itthe task of selecting the most relevant substructures is carried out by the kernel machines themselvesthe aim of the experiments is twofoldon the one hand we show that the ftk running time is linear on the average case and is much faster than qtkthis is accomplished by measuring the learning time and the average kernel computation timeon the other hand we study the impact of the different tree based kernels on the predicate argument classification accuracywe used two different corpora propbank along with penntree bank 2 and framenetpropbank contains about 53700 sentences and a fixed split between training and testing which has been used in other researches egin this split sections from 02 to 21 are used for training section 23 for testing and sections 1 and 22 as developing setwe considered a total of 122774 and 7359 arguments in training and testing respectivelytheir tree structures were extracted from the penn treebankit should be noted that the main contribution to the global accuracy is given by arg0 arg1 and argmfrom the framenet corpus we extracted all 24558 sentences of the 40 frames selected for the automatic labeling of semantic roles task of senseval 3 we mapped together the semantic roles having the same name and we considered only the 18 most frequent roles associated with verbal predicates for a total of 37948 argumentswe randomly selected 30 of sentences for testing and 70 for trainingadditionally 30 of training was used as a validationsetnote that since the framenet data does not include deep syntactic tree annotation we processed the framenet data with collins parser consequently the experiments on framenet relate to automatic syntactic parse 
treesthe classifier evaluations were carried out with the svmlighttk software available at httpainlpinfouniroma2itmoschitti which encodes st and sst kernels in the svmlight software we used the default linear and polynomial kernels for the evaluations with the standard features defined in we adopted the default regularization parameter and we tried a few costfactor values to adjust the rate between precision and recall on the validationsetfor the st and sst kernels we derived that the best a were 1 and 04 respectivelythe classification performance was evaluated using the f1 measure3 for the single arguments and the accuracy for the final multiclassifierthis latter choice allows us to compare our results with previous literature work egin this section we compare our fast tree kernel approach with the quadratic tree kernel algorithmthe latter refers to the naive evaluation of eq1 as presented in r ie f1 2p figure 5 shows the learning time4 of the svms using qtk and ftk for the classification of one large argument according to different percentages of training datawe note that with 70 of the training data ftk is about 10 times faster than qtkwith all the training data ftk terminated in 6 hours whereas qtk required more than 1 weekthe above results are quite interesting because they show that we can use tree kernels with svms on huge training sets eg on 122774 instances and the time needed to converge is approximately the one required by svms when using polynomial kernelthis latter shows the minimal complexity needed to work in the dual spaceto study the ftk running time we extracted from penntree bank the first 500 trees5 containing exactly n nodes then we evaluated all 25000 possible tree pairseach point of the figure 6 shows the average computation time on all the tree pairs of a fixed size n in the figures the trend lines which best interpolates the experimental values are also shownit clearly appears that the training time is quadratic as svms have quadratic learning time complexity whereas the ftk running time has a linear behavior the qtk algorithm shows a quadratic running time complexity as expectedin these experiments we investigate which kernel is the most accurate for the predicate argument classificationfirst we run st sst stbow sstbow linear and poly kernels over different trainingset size of propbankfigure 7 shows the learning curves associated with the above kernels for the svmbased multiclassifierwe note that ssts have a higher accuracy than sts bow does not improve either st or sst kernels and in the final part of the plot sst shows a higher gradient than st linear and polythis latter produces the best accuracy 905 in line with the literature findings using standard features and polynomial svms eg8716 in second in tables 2 and 3 we report the results using all available training data on propbank and framenet test sets respectivelyeach row of the two tables shows the f1 measure of the individual classifiers using different kernels whereas the last column illustrates the global accuracy of the multiclassifierwe note that the f1 of the single arguments across the different kernels follows the same behavior of the global multiclassifier accuracyon framenet the bow impact on the st and sst accuracy is higher than on propbank as it produces an improvement of about 15this suggests that to detect semantic roles lexical information is very important bow give a higher contribution as errors in postagging make the word pos fragments less reliable and as the framenet trees are obtained 
with the collins syntactic parser tree kernels seem robust to incorrect parse treesthird we point out that the polynomial kernel on flat features is more accurate than tree kernels but the design of such effective features required noticeable knowledge and effort on the contrary the choice of subtrees suitable to syntactically characterize a target phenomenon seems a easier task moreover by combining polynomial and sst kernels we can improve the classification accuracy ie tree kernels provide the learning algorithm with many relevant fragments which hardly can be designed by handin fact as many predicate argument structures are quite large they contain many fragmentsfinally to study the combined kernels we applied the k1 γk2 formula where k1 is either the linear or the poly kernel and k2 is the st or the sst kerneltable 4 shows the results of four kernel combinationswe note that sts and ssts improve poly and the linear kernel which uses fewer features than poly is more enhanced by the ssts than sts ielinear takes advantage by the richer feature set of the sstsit should be noted that our results of kernel combinations on framenet are in contrast with where no improvement was obtainedour explanation is that thanks to the fast evaluation of ftk we could carry out an adequate parameterizationrecently several tree kernels have been designedin the following we highlight their differences and propertiesin the sst tree kernel was experimented with the voted perceptron for the parsetree reranking taskthe combination with the original pcfg model improved the syntactic parsingadditionally it was alluded that the average execution time depends on the number of repeated productionsin a linear complexity algorithm for the computation of the st kernel is provided the main idea is the use of the suffix trees to store partial matches for the evaluation of the string kernel this can be used to compute the st fragments once the tree is converted into a stringto our knowledge ours is the first application of the st kernel for a natural language taskin an interesting algorithm that speeds up the average running time is presentedsuch algorithm looks for node pairs that have in common a large number of trees and applies a transformation to the trees rooted in such nodes to make faster the kernel computationthe results show an increase of the speed similar to the one produced by our methodin two kernels over syntactic shallow parser structures were devised for the extraction of linguistic relations eg personaffiliationto measure the similarity between two nodes the contiguous string kernel and the sparse string kernel were usedin such kernels were slightly generalized by providing a matching function for the node pairsthe time complexity for their computation limited the experiments on data set of just 200 news itemsmoreover we note that the above tree kernels are not convolution kernels as those proposed in this articlein a treekernel based on lexicalized tree adjoining grammar for the parsereranking task was proposedsince qtk was used for the kernel computation the high learning complexity forced the authors to train different svms on different slices of training dataour ftk adapted for the ltag tree kernel would have allowed svms to be trained on the whole datain a feature description language was used to extract structural features from the syntactic shallow parse trees associated with named entitiesthe experiments on the named entity categorization showed that when the description language selects an 
adequate set of tree fragments the voted perceptron algorithm increases its classification accuracythe explanation was that the complete tree fragment set contains many irrelevant features and may cause overfittingin this paper we have shown that tree kernels can effectively be adopted in practical natural language applicationsthe main arguments against their use are their efficiency and accuracy lower than traditional feature based approacheswe have shown that a fast algorithm can evaluate tree kernels in a linear average running time and also that the overall converging time required by svms is compatible with very large data setsregarding the accuracy the experiments with support vector machines on the propbank and framenet predicate argument structures show that the richer the kernel is in term of substructures the higher the accuracy is tree kernels are effective also in case of automatic parse trees and as kernel combinations always improve traditional feature models the best approach is to combine scalarbased and structured based kernelsi would like to thank the ai group at the university of rome tor vergatamany thanks to the eacl 2006 anonymous reviewers roberto basili and giorgio satta who provided me with valuable suggestionsthis research is partially supported by the presto space eu project fp6507336
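the following sketches illustrate the kernels discussed in the paper above; they are hedged python sketches, not the svm-light-tk implementation. this one shows the delta recursion behind the st and sst kernels in the standard three-case formulation, with sigma = 0 giving the st kernel and sigma = 1 the sst kernel; the decay factor lambda and the bag-of-words variants are omitted, and the node representation is an illustrative assumption.

class Node:
    def __init__(self, label, children=()):
        self.label = label
        self.children = list(children)
    def production(self):
        return (self.label, tuple(c.label for c in self.children))
    def is_preterminal(self):
        return len(self.children) == 1 and not self.children[0].children

def all_nodes(root):
    stack, out = [root], []
    while stack:
        n = stack.pop()
        out.append(n)
        stack.extend(n.children)
    return out

def delta(n1, n2, sigma):
    if n1.production() != n2.production():
        return 0.0                      # case 1: different productions
    if n1.is_preterminal():
        return 1.0                      # case 2: same production, pre-terminals
    prod = 1.0                          # case 3: recurse over aligned children
    for c1, c2 in zip(n1.children, n2.children):
        prod *= sigma + delta(c1, c2, sigma)
    return prod

def tree_kernel(t1_nodes, t2_nodes, sigma):
    # K(T1, T2) = sum of delta over all pairs of internal (non-leaf) nodes
    return sum(delta(n1, n2, sigma)
               for n1 in t1_nodes if n1.children
               for n2 in t2_nodes if n2.children)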
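a sketch of the fast tree kernel idea: restrict the summation to node pairs that share a production, here grouped with a hash map, which is equivalent to the sorted production-list scan described in the paper; the delta function and the production() method are assumed to be as in the previous sketch, and the memoization of delta over node pairs is left out.

from collections import defaultdict

def node_pairs(t1_nodes, t2_nodes):
    # group T2's nodes by production; equivalent to sorting the two production
    # lists alphanumerically and scanning them in parallel
    by_prod = defaultdict(list)
    for n2 in t2_nodes:
        if n2.children:
            by_prod[n2.production()].append(n2)
    pairs = []
    for n1 in t1_nodes:
        if n1.children:
            pairs.extend((n1, n2) for n2 in by_prod.get(n1.production(), ()))
    return pairs

def fast_tree_kernel(t1_nodes, t2_nodes, sigma, delta):
    # delta is zero for every pair left out of the list, so summing only over
    # the matching pairs gives the same kernel value while skipping null entries
    return sum(delta(n1, n2, sigma) for n1, n2 in node_pairs(t1_nodes, t2_nodes))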
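a sketch of the one-vs-all decision rule used for argument classification, t = arg max_i score_i(x); the classifier scores would come from the trained svms, and the dummy scorers below are only for illustration.

def classify_argument(example, classifiers):
    # classifiers is a mapping role -> scoring function; pick the highest score
    return max(classifiers, key=lambda role: classifiers[role](example))

# usage sketch with dummy scorers
classifiers = {"Arg0": lambda x: 0.2, "Arg1": lambda x: 0.7, "ArgM": lambda x: -0.1}
print(classify_argument("some predicate-argument structure", classifiers))   # -> "Arg1"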
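a sketch of the kernel combination k1 + gamma k2 used in the experiments, with k1 a kernel over manually designed flat features and k2 a tree kernel; the instance objects carrying both a feature vector and a tree are an assumption of this sketch.

def combined_kernel(k1, k2, gamma):
    # K(a, b) = K1(a, b) + gamma * K2(a, b); a valid kernel for gamma >= 0
    def k(a, b):
        return k1(a.features, b.features) + gamma * k2(a.tree, b.tree)
    return k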
E06-1015
making tree kernels practical for natural language learningin recent years tree kernels have been proposed for the automatic learning of natural language applicationsunfortunately they show an inherent super linear complexity and a lower accuracy than traditional attributevalue methodsin this paper we show that tree kernels are very helpful in the processing of natural language as we provide a simple algorithm to compute tree kernels in linear average running time and our study on the classification properties of diverse tree kernels show that kernel combinations always improve the traditional methodsexperiments with support vector machines on the predicate argument classification task provide empirical support to our thesiswe introduce a fast implementation of tree kernels where a node pair set is first constructed for those associated with same production rules
determining term subjectivity and term orientation for opinion mining mining a recent subdiscipline of computational linguistics which is concerned not with the topic a document is about but with the opinion it expresses to aid the extraction of opinions from text recent work has tackled the issue determining the subjective terms contained in text ie deciding whether a term that carries opinionated content has a positive or a negative connotation this is believed to be of key importance for identifying the orientation of documents ie determining whether a document expresses a positive or negative opinion about its subject matter we contend that the plain determination of the orientation of terms is not a realistic problem since it starts from the nonrealistic assumption that we already know whether a term is subjective or not this would imply that a linguistic resource that marks terms as subjective or objective is available which is usually not the case in this paper we confront the task of deciding whether a given term has a positive or a negative connotation no subjective connotation at this problem thus subsumes the problem of desubjectivity problem of determining orientation we tackle this problem by testing three different variants of a semisupervised method previously proposed for orientation detection our results show that determining subjectivity is a much harder problem than determining orientation alone opinion mining is a recent subdiscipline of computational linguistics which is concerned not with the topic a document is about but with the opinion it expressesopiniondriven content management has several important applications such as determining critics opinions about a given product by classifying online product reviews or tracking the shifting attitudes of the general public toward a political candidate by mining online forumswithin opinion mining several subtasks can be identified all of them having to do with tagging a given document according to expressed opinion to aid these tasks recent work has tackled the issue of identifying the orientation of subjective terms contained in text ie determining whether a term that carries opinionated content has a positive or a negative connotation examples honest and intrepid have a positive connotation while disturbing and superfluous have a negative connotationthis is believed to be of key importance for identifying the orientation of documents since it is by considering the combined contribution of these terms that one may hope to solve tasks 1 2 and 3 abovethe conceptually simplest approach to this latter problem is probably turneys who has obtained interesting results on task 2 by considering the algebraic sum of the orientations of terms as representative of the orientation of the document they belong to but more sophisticated approaches are also possible implicit in most works dealing with term orientation is the assumption that for many languages for which one would like to perform opinion mining there is no available lexical resource where terms are tagged as having either a positive or a negative connotation and that in the absence of such a resource the only available route is to generate such a resource automaticallyhowever we think this approach lacks realism since it is also true that for the very same languages there is no available lexical resource where terms are tagged as having either a subjective or an objective connotationthus the availability of an algorithm that tags subjective terms as being either positive or 
negative is of little help since determining if a term is subjective is itself nontrivialin this paper we confront the task of determining whether a given term has a positive connotation or a negative connotation or has instead no subjective connotation at all this problem thus subsumes the problem of deciding between subjective and objective and the problem of deciding between positive and negativewe tackle this problem by testing three different variants of the semisupervised method for orientation detection proposed in our results show that determining subjectivity and orientation is a much harder problem than determining orientation alonethe rest of the paper is structured as followssection 2 reviews related work dealing with term orientation andor subjectivity detectionsection 3 briefly reviews the semisupervised method for orientation detection presented in section 4 describes in detail three different variants of it we propose for determining at the same time subjectivity and orientation and describes the general setup of our experimentsin section 5 we discuss the results we have obtainedsection 6 concludesmost previous works dealing with the properties of terms within an opinion mining perspective have focused on determining term orientationhatzivassiloglou and mckeown attempt to predict the orientation of subjective adjectives by analysing pairs of adjectives extracted from a large unlabelled document setthe underlying intuition is that the act of conjoining adjectives is subject to linguistic constraints on the orientation of the adjectives involved eg and usually conjoins adjectives of equal orientation while but conjoins adjectives of opposite orientationthe authors generate a graph where terms are nodes connected by equalorientation or oppositeorientation edges depending on the conjunctions extracted from the document seta clustering algorithm then partitions the graph into a positive cluster and a negative cluster based on a relation of similarity induced by the edgesturney and littman determine term orientation by bootstrapping from two small sets of subjective seed terms their method is based on computing the pointwise mutual information of the target term t with each seed term ti as a measure of their semantic associationgiven a target term t its orientation value o is given by the sum of the weights of its semantic association with the seed positive terms minus the sum of the weights of its semantic association with the seed negative termsfor computing pmi term frequencies and cooccurrence frequencies are measured by querying a document set by means of the altavista search engine1 with a t query a ti query and a t near ti query and using the number of matching documents returned by the search engine as estimates of the probabilities needed for the computation of pmikamps et al consider instead the graph defined on adjectives by the wordnet2 synonymy relation and determine the orientation of a target adjective t contained in the graph by comparing the lengths of the shortest path between t and the seed term good and the shortest path between t and the seed term bad if the former is shorter than the latter than t is deemed to be positive otherwise it is deemed to be negativetakamura et al determine term orientation according to a spin model ie a physical model of a set of electrons each endowed with one between two possible spin directions and where electrons propagate their spin direction to neighbouring electrons until the system reaches a stable configurationthe authors 
equate terms with electrons and term orientation to spin directionthey build a neighbourhood matrix connecting each pair of terms if one appears in the gloss of the other and iteratively apply the spin model on the matrix until a minimum energy configuration is reachedthe orientation assigned to a term then corresponds to the spin direction assigned to electronsthe system of kim and hovy tackles orientation detection by attributing to each term a positivity score and a negativity score interestingly terms may thus be deemed to have both a positive and a negative correlation maybe with different degrees and some terms may be deemed to carry a stronger positive orientation than otherstheir system starts from a set of positive and negative seed terms and expands the positive seed set by adding to it the synonyms of positive seed terms and the antonyms of negative seed termsthe system classifies then a target term t into either positive or negative by means of two alternative learningfree methods based on the probabilities that synonyms of t also appear in the respective expanded seed setsa problem with this method is that it can classify only terms that share some synonyms with the expanded seed setskim and hovy also report an evaluation of human intercoder agreementwe compare this evaluation with our results in section 5the approach we have proposed for determining term orientation is described in more detail in section 3 since it will be extensively used in this paperall these works evaluate the performance of the proposed algorithms by checking them against precompiled sets of positive and negative terms ie checking how good the algorithms are at classifying a term known to be subjective into either positive or negativewhen tested on the same benchmarks the methods of have performed with comparable accuracies is much more efficient than the one of and have outperformed the method of by a wide margin and the one by by a very wide marginthe methods described in is also limited by the fact that it can only decide the orientation of adjectives while the method of is further limited in that it can only work on adjectives that are present in wordnetthe methods of are instead difficult to compare with the other ones since they were not evaluated on publicly available datasetsriloff et al develop a method to determine whether a term has a subjective or an objective connotation based on bootstrapping algorithmsthe method identifies patterns for the extraction of subjective nouns from text bootstrapping from a seed set of 20 terms that the authors judge to be strongly subjective and have found to have high frequency in the text collection from which the subjective nouns must be extractedthe results of this method are not easy to compare with the ones we present in this paper because of the different evaluation methodologieswhile we adopt the evaluation methodology used in all of the papers reviewed so far the authors do not test their method on an independently identified set of labelled terms but on the set of terms that the algorithm itself extractsthis evaluation methodology only allows to test precision and not accuracy tout court since no quantification can be made of false negatives in section 5 this will prevent us from drawing comparisons between this method and our ownbaroni and vegnaduzzo apply the pmi method first used by turney and littman to determine term orientation to determine term subjectivitytheir method uses a small set s of 35 adjectives marked as subjective by human judges to 
assign a subjectivity score to each adjective to be classifiedtherefore their method unlike our own does not classify terms but ranks them according to a subjectivity score on which they evaluate precision at various level of recallterm orientation by semisupervised learning the method we use in this paper for determining term subjectivity and term orientation is a variant of the method proposed in for determining term orientation alonethis latter method relies on training in a semisupervised way a binary classifier that labels terms as either positive or negativea semisupervised method is a learning process whereby only a small subset l c tr of the training data tr are humanlabelledin origin the training data in you tr l are instead unlabelled it is the process itself that labels them automatically by using l as inputthe method of starts from two small seed sets lp and ln of known positive and negative terms respectively and expands them into the two final training sets trp d lp and trn d ln by adding them new sets of terms up and un found by navigating the wordnet graph along the synonymy and antonymy relations3this process is based on the hypothesis that synonymy and antonymy in addition to defining a relation of meaning also define a relation of orientation ie that two synonyms typically have the same orientation and two antonyms typically have opposite orientationthe method is iterative generating two sets trkp and trknat each iteration k where trkp d trk1 p d d tr1 p lp and trkn d trk1 n d d tr1 n lnat iteration k trkp is obtained by adding to trk1 p all synonyms of terms in trk1 pand all antonyms of terms in trk1 n similarly trknis obtained by adding to trk1 n all synonyms of terms in trk1 n and all antonyms of terms in trk1 p if a total of k iterations are performed then tr trkp you trkn the second main feature of the method presented in is that terms are given vectorial representations based on their wordnet glosses for each term ti in tr you te a textual representation of ti is generated by collating all the glosses of ti as found in wordnet4each such representation is converted into vectorial form by standard text indexing techniques and in the present work stop words are removed and the remaining words are weighted by cosinenormalized tfidf no stemming is performed5this representation method is based on the assumption that terms with a similar orientation tend to have similar glosses for instance that the glosses of honest and intrepid will both contain appreciative expressions while the glosses of disturbing and superfluous will both contain derogative expressionsnote that this method allows to classify any term independently of its pos provided there is a gloss for it in the lexical resourceonce the vectorial representations for all terms in trut e have been generated those for the terms in tr are fed to a supervised learner which thus generates a binary classifierthis latter once fed with the vectorial representations of the terms in te classifies each of them as either positive or negativein this paper we extend the method of to the determination of term subjectivity and term orientation altogetherthe benchmark we use for our experiments is the general inquirer lexicon this is a lexicon of terms labelled according to a large set of categories6 each one denoting the presence of a specific trait in the termthe two main categories and the ones we will be concerned with are positivenegative which contain 19152291 terms having a positivenegative orientation in opinion mining 
research the gi was first used by turney and littman who reduced the list of terms to 16141982 entries afit may have more than one sense dictionaries normally associate one gloss to each sense5several combinations of subparts of a wordnet gloss are tested as textual representations of terms in of all those combinations in the present paper we always use the dgs combination since this is the one that has been shown to perform best in dgs corresponds to using the entire gloss and performing negation propagation on its text ie replacing all the terms that occur after a negation in a sentence with negated versions of the term for details ter removing 17 terms appearing in both categories and reducing all the multiple entries of the same term in a category caused by multiple senses to a single entrylikewise we take all the 7582 gi terms that are not labelled as either positive or negative as being labelled as objective and reduce them to 5009 terms after combining multiple entries of the same term caused by multiple senses to a single entrythe effectiveness of our classifiers will thus be evaluated in terms of their ability to assign the total 8605 gi terms to the correct category among positive negative and objective7similarly to our training set is obtained by expanding initial seed sets by means of wordnet lexical relationsthe main difference is that our training set is now the union of three sets of training terms tr trkp trkn trko obtained by expanding through k iterations three seed sets tr1p tr1n tr1o one for each of the categories positive negative and objective respectivelyconcerning categories positive and negative we have used the seed sets expansion policy and number of iterations that have performed best in the experiments of ie the seed sets tr1p good and tr1 n bad expanded by using the union of synonymy and indirect antonymy restricting the relations only to terms with the same pos of the original terms for a total of k 4 iterationsthe final expanded sets contain 6053 positive terms and 6874 negative termsconcerning the category objective the process we have followed is similar but with a few key differencesthese are motivated by the fact that the objective category coincides with the complement of the union of positive and negative therefore objective terms are more varied and diverse in meaning than the terms in the other two categoriesto obtain a representative expanded set trko we have chosen the seed set tr1o entity and we have expanded it by using along with synonymy and antonymy the wordnet relation of hyponymy and without imposing the restriction that the two related terms must have the same posthese choices are strictly related to each other the term entity is the root term of the largest generalization hierarchy in wordnet with more than 40000 terms thus allowing to reach a very large number of terms by using the hyponymy relation8moreover it seems reasonable to assume that terms that refer to entities are likely to have an objective nature and that hyponyms of an objective term are also objectivenote that at each iteration k a given term t is added to trko only if it does not already belong to either trp or trnwe experiment with two different choices for the tro set corresponding to the sets generated in k 3 and k 4 iterations respectively this yields sets tr3o and tr4o consisting of 8353 and 33870 training terms respectivelywe experiment with three philosophically different learning approaches to the problem of distinguishing between positive negative and objective 
termsapproach i is a twostage method which consists in learning two binary classifiers the first classifier places terms into either subjective or objective while the second classifier places terms that have been classified as subjective by the first classifier into either positive or negativein the training phase the terms in trkp trkn are used as training examples of category subjectiveapproach ii is again based on learning two binary classifiershere one of them must discriminate between terms that belong to the positive category and ones that belong to its complement while the other must discriminate between terms that belong to the negative category and ones that belong to its complement terms that have been classified both into positive by the former classifier and into by the latter are deemed to be positive and terms that have been classified both into by the former classifier and into negative by the latter are deemed to be negativethe terms that have been classified into both and or into both positive and negative are taken to be objectivein the training phase of approach ii the terms in trkn trko are used as training examples of category and the terms in trkp trko are used as training examples of category approach iii consists instead in viewing positive negative and objective as three categories with equal status and in learning a ternary classifier that classifies each term into exactly one among the three categoriesthere are several differences among these three approachesa first difference of a conceptual nature is that only approaches i and iii view objective as a category or concept in its own right while approach ii views objectivity as a nonexistent entity ie as the absence of subjectivity a second difference is that approaches i and ii are based on standard binary classification technology while approach iii requires multiclass classificationas a consequence while for the former we use wellknown learners for binary classification support vector machines using linear kernels the rocchio learner and its prtfidf probabilistic version for approach iii we use their multiclass versions9before running our learners we make a pass of feature selection with the intent of retaining only those features that are good at discriminating our categories while discarding those which are notfeature selection is implemented by scoring each feature fk by means of the mutual information function defined as and discarding the x features fk that minimize itwe will call x the reduction factornote that the set c1 cm from equation 1 is interpreted differently in approaches i to iii and always consistently with who the categories at stake aresince the task we aim to solve is manifold we will evaluate our classifiers according to two evaluation measures and objective ie in deciding both term orientation and subjectivitywe present results obtained from running every combination of the three approaches to classification described in section 43 the four learners mentioned in the same section five different reduction factors for feature selection and the two different training sets for objective mentioned in section 42we discuss each of these four dimensions of the problem individually for each one reporting results averaged across all the experiments we have run the first and most important observation is that with respect to a pure term orientation task accuracy drops significantlyin fact the best soaccuracy and the best pnoaccuracy results obtained across the 120 different experiments are 676 and 660 
respectively this contrasts sharply with the accuracy obtained in on discriminating positive from negative on the same benchmarks and essentially the same algorithmsthis suggests that good performance at orientation detection may not be a ceclcm fefkfk guarantee of good performance at subjectivity detection quite evidently a harder taskthis hypothesis is confirmed by an experiment performed by kim and hovy on testing the agreement of two human coders at tagging words with the positive negative and objective labelsthe authors define two measures of such agreement strict agreement equivalent to our pnoaccuracy and lenient agreement which measures the accuracy at telling negative against the restfor any experiment strict agreement values are then going to be by definition lower or equal than the corresponding lenient onesthe authors use two sets of 462 adjectives and 502 verbs respectively randomly extracted from the basic english word list of the toefl testthe intercoder agreement results show a deterioration in agreement of 1677 for adjectives and 3642 for verbsfollowing this we evaluated our best experiment according to these measures and obtained a strict accuracy value of 660 and a lenient accuracy value of 821 with a relative deterioration of 2439 in line with kim and hovys observation10this confirms that determining subjectivity and orientation is a much harder task than determining orientation alonethe second important observation is that there is very little variance in the results across all 120 experiments average soaccuracy and pnoaccuracy results were 635 and 603 a mere 606 and 864 deterioration from the best results reported abovethis seems to indicate that the levels of performance obtained may be hard to improve upon especially if working in a similar frameworklet us analyse the individual dimensions of the problemconcerning the three approaches to classification described in section 43 approach ii outperforms the other two but by an extremely narrow marginas for the choice of learners on average the best performer is nb but again by a very small margin wrt the otherson average the 10we observed this trend in all of our experiments best reduction factor for feature selection turns out to be 50 but the performance drop we witness in approaching 99 is extremely gracefulas for the choice of trko we note that tro and tr4o elicit comparable levels of performance with the former performing best at soaccuracy and the latter performing best at pnoaccuracyan interesting observation on the learners we have used is that nb prtfidf and svms unlike rocchio generate classifiers that depend on p the prior probabilities of the classes which are normally estimated as the proportion of training documents that belong to ciin many classification applications this is reasonable as we may assume that the training data are sampled from the same distribution from which the test data are sampled and that these proportions are thus indicative of the proportions that we are going to encounter in the test datahowever in our application this is not the case since we do not have a natural sample of training termswhat we have is one humanlabelled training term for each category in positivenegativeobjective and as many machinelabelled terms as we deem reasonable to include in possibly different numbers for the different categories and we have no indication whatsoever as to what the natural proportions among the three might bethis means that the proportions of positive negative and objective terms we decide 
to include in the training set will strongly bias the classification results if the learner is one of nb prtfidf and svmswe may notice this by looking at table 3 which shows the average proportion of test terms classified as objective by each learner depending on whether we have chosen tro to coincide with tr3o or tr4o note that the former choice means having roughly as many objective training terms as there are positive and negative onestable 3 shows that the more objective training terms there are the more test terms nb prtfidf and svms will classify as objective this is not true for rocchio which is basically unaffected by the variation in size of trowe have presented a method for determining both term subjectivity and term orientation for opinion mining applicationsthis is a valuable advance with respect to the state of the art since past work in this area had mostly confined to determining term orientation alone a task that has limited practical significance in itself given the generalized absence of lexical resources that tag terms as being either subjective or objectiveour algorithms have tagged by orientation and subjectivity the entire general inquirer lexicon a complete generalpurpose lexicon that is the de facto standard benchmark for researchers in this fieldour results thus constitute for this task the first baseline for other researchers to improve uponunfortunately our results have shown that an algorithm that had shown excellent stateoftheart performance in deciding term orientation once modified for the purposes of deciding term subjectivity performs more poorlythis has been shown by testing several variants of the basic algorithm some of them involving radically different supervised learning policiesthe results suggest that deciding term subjectivity is a substantially harder task that deciding term orientation alone
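The iterative seed-set expansion described above (synonyms keep a seed's orientation, antonyms flip it, repeated for k iterations) can be sketched roughly as follows. This is a minimal illustration using NLTK's WordNet interface rather than the authors' implementation: the restriction to same-POS "indirect" antonyms and the separate hyponymy-based expansion of the Objective class from the seed "entity" are simplified away, and the function and variable names are mine.

```python
# Minimal sketch of the iterative seed-set expansion over WordNet
# relations described above; not the authors' implementation.
from nltk.corpus import wordnet as wn

def synonyms_and_antonyms(term):
    """Collect WordNet synonyms and antonyms of a term, over all its senses."""
    syns, ants = set(), set()
    for synset in wn.synsets(term):
        for lemma in synset.lemmas():
            syns.add(lemma.name().lower())
            ants.update(a.name().lower() for a in lemma.antonyms())
    return syns, ants

def expand_seeds(seed_pos, seed_neg, iterations=4):
    """Grow positive/negative training sets: synonyms keep a term's
    orientation, antonyms flip it (the orientation hypothesis above)."""
    tr_p, tr_n = set(seed_pos), set(seed_neg)
    for _ in range(iterations):
        new_p, new_n = set(), set()
        for term in tr_p:
            syns, ants = synonyms_and_antonyms(term)
            new_p |= syns
            new_n |= ants
        for term in tr_n:
            syns, ants = synonyms_and_antonyms(term)
            new_n |= syns
            new_p |= ants
        tr_p |= new_p
        tr_n |= new_n
    # Terms reached with both labels would need a resolution policy; the
    # Objective set in the text is built separately, excluding anything
    # already present in tr_p or tr_n.
    return tr_p, tr_n

tr_p, tr_n = expand_seeds({"good"}, {"bad"}, iterations=4)
```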
E06-1025
Determining term subjectivity and term orientation for opinion mining. Opinion mining is a recent subdiscipline of computational linguistics which is concerned not with the topic a document is about, but with the opinion it expresses. To aid the extraction of opinions from text, recent work has tackled the issue of determining the orientation of subjective terms contained in text, i.e. deciding whether a term that carries opinionated content has a positive or a negative connotation. This is believed to be of key importance for identifying the orientation of documents, i.e. determining whether a document expresses a positive or negative opinion about its subject matter. We contend that the plain determination of the orientation of terms is not a realistic problem, since it starts from the nonrealistic assumption that we already know whether a term is subjective or not; this would imply that a linguistic resource that marks terms as subjective or objective is available, which is usually not the case. In this paper we confront the task of deciding whether a given term has a positive connotation, or a negative connotation, or has no subjective connotation at all; this problem thus subsumes the problem of determining subjectivity and the problem of determining orientation. We tackle this problem by testing three different variants of a semi-supervised method previously proposed for orientation detection. Our results show that determining subjectivity and orientation is a much harder problem than determining orientation alone.
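For the classification stage described in the paper above, each term is represented by the TF-IDF vector of its collated WordNet glosses, a fraction x of features is discarded by mutual information, and a binary or ternary learner is trained. The mutual-information criterion, which appears garbled in the text, is presumably the standard MI(f_k) = sum over c in {c_1, ..., c_m} and f in {f_k, not f_k} of P(f, c) log [ P(f, c) / (P(f) P(c)) ]. The scikit-learn sketch below illustrates the ternary "Approach III" variant; the gloss_text() helper, the SelectPercentile-based reduction and the LinearSVC learner are illustrative stand-ins, not the exact experimental configuration.

```python
# Sketch of the gloss-based term classification described above
# (Approach III: one ternary learner over Positive/Negative/Objective).
from nltk.corpus import wordnet as wn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectPercentile, mutual_info_classif
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

def gloss_text(term):
    """Collate all WordNet glosses of `term` into a single pseudo-document."""
    return " ".join(s.definition() for s in wn.synsets(term))

def train_ternary_classifier(train_terms, train_labels, reduction_factor=0.5):
    """TF-IDF gloss vectors -> mutual-information feature selection ->
    multiclass linear SVM (one-vs-rest)."""
    docs = [gloss_text(t) for t in train_terms]
    clf = make_pipeline(
        TfidfVectorizer(stop_words="english", norm="l2"),
        SelectPercentile(mutual_info_classif,
                         percentile=int(100 * (1 - reduction_factor))),
        LinearSVC(),
    )
    clf.fit(docs, train_labels)
    return clf

# usage: clf = train_ternary_classifier(expanded_terms, expanded_labels)
#        clf.predict([gloss_text("honest"), gloss_text("table")])
```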
mining wordnet for a fuzzy sentiment sentiment tag extraction from wordnet glosses many of the tasks required for semantic tagging of phrases and texts rely on a list of words annotated with some semanticfeatures we present a method for ex tracting sentimentbearing adjectives fromwordnet using the sentiment tag extrac tion program we did 58 step runs on unique nonintersecting seed lists drawn from manually annotated list ofpositive and negative adjectives and evaluated the results against other manually annotated lists the 58 runs were then col lapsed into a single set of 7 813 unique words for each word we computed a net overlap score by subtracting the totalnumber of runs assigning this word a neg ative sentiment from the total of the runs that consider it positive we demonstrate that net overlap score can be used as ameasure of the words degree of member ship in the fuzzy category of sentimentthe core adjectives which had the high est net overlap scores were identifiedmost accurately both by step and by hu man annotators while the words on the periphery of the category had the lowest scores and were associated with low rates of interannotator agreement many of the tasks required for effective seman tic tagging of phrases and texts rely on a list ofwords annotated with some lexical semantic fea turestraditional approaches to the development of such lists are based on the implicit assumption of classical truthconditional theories of meaningrepresentation which regard all members of a category as equal no element is more of a member than any other in this paper we challenge the applicability of this assump tion to the semantic category of sentiment whichconsists of positive negative and neutral subcate gories and present a dictionarybased sentiment tag extraction program that we use to generate a fuzzy set of english sentimentbearing words for the use in sentiment tagging systems 1the proposed approach based on the fuzzy logic is used here to assign fuzzy sen timent tags to all words in wordnet that is it assigns sentiment tags and a degreeof centrality of the annotated words to the sentiment categorythis assignment is based on word net glossesthe implications of this approach for nlp and linguistic research are discussedset some semantic categories have clear membership of color body parts or professions while others are much more difficult to definethis prompted the developmentof approaches that regard the transition frommem bership to nonmembership in a semantic category as gradual rather than abrupt in this paper we approach the category of sentiment as one of such fuzzy categories wheresome words such as good bad are very central prototypical members while other less central words may be interpreted differently by differ ent peoplethus as annotators proceed from thecore of the category to its periphery word mem 1sentiment tagging is defined here as assigning positivenegative and neutral labels to words according to the senti ment they express209 bership in this category becomes more ambiguous and hence lower interannotator agreement can be expected for more peripheral wordsunder theclassical truthconditional approach the disagree ment between annotators is invariably viewed as a sign of poor reliability of coding and is eliminatedby trainingannotators to code difficult and am biguous cases in some standard waywhile this procedure leads to high levels of interannotator agreement on a list created by a coordinated team of researchers the naturally occurring differencesin the 
interpretation of words located on the pe riphery of the category can clearly be seen whenannotations by two independent teams are comparedthe table 1 presents the comparison of gi h4 2 and hm study lists of words manuallyannotated with sentiment tags by two different re search teamsgih4 hm list composition nouns verbs adj advadjonly total list size 8 211 1 336 total adjectives 1 904 1 336tags assigned positiv nega tiv or no tag positiveor nega tive adjwith 1 268 1 336 nonneutral tags intersection 774 of gih4 adj of hm agreement on tags 787table 1 agreement between gih4 and hm an notations on sentiment tagsthe approach to sentiment as a category withfuzzy boundaries suggests that the 213 dis agreement between the two manually annotatedlists reflects a natural variability in human annotatorsjudgment and that this variability is related to the degree of centrality andor relative importance of certain words to the category of sen timentthe attempts to address this difference 2the general inquirer list used in this study was manually cleaned to remove duplicate entries for words with same part of speech and sentimentonly the harvard iv4 list component of the whole gi was used in this study sinceother lists included in gi lack the sentiment annotationun less otherwise specified we used the full gih4 list including the neutral words that were not assigned positiv or negativ annotationsin importance of various sentiment markers have crystallized in two main approaches automatic assignment of weights based on some statistical criterion and others or manual annotation the statistical approaches usually employ some quantitative criterion goodnessforfitmeasure in probabil ity of words sentiment given the sentiment if itssynonyms in etc to de fine the strength of the sentiment expressed by aword or to establish a threshold for the member ship in the crisp sets 3 of positive negative andneutral wordsboth approaches have their limi tations the first approach produces coarse results and requires large amounts of data to be reliablewhile the second approach is prohibitively expen sive in terms of annotator time and runs the risk ofintroducing a substantial subjective bias in anno tationsin this paper we seek to develop an approachfor semantic annotation of a fuzzy lexical cate gory and apply it to sentiment annotation of allwordnet wordsthe sections that follow describe the proposed approach used to extract sen timent information from wordnet entries usingstep algo rithm discuss the overall performance of step on wordnet glosses outline the method fordefining centrality of a word to the sentiment cate gory and compare the results of both automatic and manual sentiment annotations to the manuallyannotated gih4 list which was used as a gold standard in this experimentthe comparisons are performed separately for each of the subsets of gih4 that are characterized by adifferent distance from the core of the lexical cat egory of sentimentwordnet entries word lists for sentiment tagging applications can be compiled using different methodsautomatic methods of sentiment annotation at the word level can be grouped into two major categories corpusbased approaches and dictionarybased3we use the term crisp set to refer to traditional non fuzzy sets 210 approachesthe first group includes methods that rely on syntactic or cooccurrence patternsof words in large texts to determine their sentiment and oth ersthe majority of dictionarybased approaches use wordnet information especially synsets and hierarchies to acquire 
sentimentmarked words or to measure the similarity between candidate words and sentimentbearing words such as good and bad in this paper we propose an approach to sentiment annotation of wordnet entries that was implemented and tested in the semantic tag extrac tion program this approach relies bothon lexical relations provided in wordnet and on the wordnet glossesit builds upon the properties of dic tionary entries as a special kind of structured textsuch lexicographical texts are built to establish se mantic equivalence between the lefthand and therighthand parts of the dictionary entry and there fore are designed to match as close as possible the components of meaning of the wordthey have relatively standard style grammar and syntactic structures which removes a substantial source of noise common to other types of text and finally they have extensive coverage spanning the entire lexicon of a natural languagethe step algorithm starts with a small set of seed words of known sentiment value this list is augmented during thefirst pass by adding synonyms antonyms and hy ponyms of the seed words supplied in wordnetthis step brings on average a 5fold increase in the size of the original list with the accuracy of the resulting list comparable to manual annotations at the second pass the system goes through all wordnet glosses and identifies the entries that contain in their definitions the sentimentbearing words from the extended seed list and adds these head words to the corresponding category positive negative or neutral a third cleanup pass is then performed to partially disambiguate the identified wordnet glosses with brills partofspeech tagger which performs with up to 95 accuracy and eliminates errors introduced into the list by partofspeech ambiguity of some words acquired in pass 1 and from the seed listat this step we also filter outall those words that have been assigned contradict ing positive and negative sentiment values within the same runthe performance of step was evaluated using gih4 as a gold standard while the hm list wasused as a source of seed words fed into the systemwe evaluated the performance of our sys tem against the complete list of 1904 adjectives in gih4 that included not only the words that were marked as positiv negativ but also those that werenot considered sentimentladen by gih4 annota tors and hence were by default considered neutralin our evaluationfor the purposes of the evalua tion we have partitioned the entire hm list into 58nonintersecting seed lists of adjectivesthe re sults of the 58 runs on these nonintersecting seed lists are presented in table 2the table 2 showsthat the performance of the system exhibits sub stantial variability depending on the composition of the seed list with accuracy ranging from 476to 875 percent 110average average run size correct of adj stdev stdev pass 1 103 29 780 105 pass 2 630 377 645 108 pass 3 435 291 712 110 table 2 performance statistics on step runsthe significant variability in accuracy of the runs is attributable to the variability in the properties of the seed list words in these runsthe hm list includes some sentimentmarked words where not all meanings are laden with sentiment but also the words where some meanings are neutral and even the wordswhere such neutral meanings are much more fre quent than the sentimentladen onesthe runswhere seed lists included such ambiguous adjectives were labeling a lot of neutral words as sen timent marked since such seed words were more likely to be found in the wordnet glosses 
in their more frequent neutral meaningfor example run 53 had in its seed list two ambiguous adjectives 1 dim and plush which are neutral in most of the contextsthis resulted in only 526 accuracy run 48 on theother hand by a sheer chance had only unam biguous sentimentbearing words in its seed list and thus performed with a fairly high accuracy in order to generate a comprehensive list cov ering the entire set of wordnet adjectives the 58 runs were then collapsed into a single set of unique wordssince many of the clearly sentimentladen adjectives that form the core of the category of sentiment were identified by step in multiple runs and had therefore multiple duplicates in thelist that were counted as one entry in the com bined list the collapsing procedure resulted in a loweraccuracy but much larger list of english adjectives marked as positive or neg ative the remainder of wordnets 22 141 adjectives was not found in any step run and hence was deemed neutral overall the systems 665 accuracy on thecollapsed runs is comparable to the accuracy re ported in the literature for other systems run onlarge corpora in order to make a meaningful comparison with the results reported in we also did an evaluation of step results on positives andnegatives only and compared our labels tothe remaining 1266 gih4 adjectivesthe accuracy on this subset was 734 which is compara ble to the numbers reported by turney and littman for experimental runs on 3 596 sentiment marked gi words from different parts of speechusing a 2x109 corpus to compute pointwise mu tual information between the gi words and 14 manually selected positive and negative paradigm words the analysis of step system performancevsgih4 and of the disagreements between man ually annotated hm and gih4 showed that the greatest challenge with sentiment tagging ofwords lies at the boundary between sentimentmarked and sentiment neutral wordsthe 7 performance gain associated with the removal of neutrals from the evaluation set emphasizes the importance of neutral words as a major source of sentiment extraction system errors 4moreover the boundary between sentimentbearing and neutral words in gih4 accountsfor 93 of disagreements between the labels assigned to adjectives in gih4 and hm by two in dependent teams of human annotatorsthe viewtaken here is that the vast majority of such inter annotator disagreements are not really errors but a reflection of the natural ambiguity of the words that are located on the periphery of the sentiment categorycentrality to the semantic category the approach to sentiment category as a fuzzyset ascribes the category of sentiment some spe cific structural propertiesfirst as opposed to thewords located on the periphery more central ele ments of the set usually have stronger and more numerous semantic relations with other categorymembers 5second the membership of these cen tral words in the category is less ambiguous than the membership of more peripheral wordsthus we can estimate the centrality of a word in a given category in two ways 1through the density of the words relationships with other words by enumerating its semantic ties to other words within the field and calculating membership scores based on the number of these ties and 2through the degree of word membership ambiguity by assessing the interannotator agreement on the word membership in this categorylexicographical entries in the dictionaries suchas wordnet seek to establish semantic equivalence between the word and its definition and provide a rich source of 
humanannotated relationships between the wordsby using a bootstrap ping system such as step that follows the links between the words in wordnet to find similarwords we can identify the paths connecting mem bers of a given semantic category in the dictionarywith multiple bootstrapping runs on different seed 4it is consistent with the observation by kim and hovy who noticed that when positives and neutrals were collapsed into the same category opposed to negatives the agreement between human annotators rose by 125the operationalizations of centrality derived from thenumber of connections between elements can be found in so cial network theory 212lists we can then produce a measure of the density of such tiesthe ambiguity measure de rived from interannotator disagreement can then be used to validate the results obtained from the densitybased method of determining centralityin order to produce a centrality measure we conducted multiple runs with nonintersecting seed lists drawn from hmthe lists of wordsfetched by step on different runs partially over lapped suggesting that the words identified by the system many times as bearing positive or negativesentiment are more central to the respective cate goriesthe number of times the word has been fetched by step runs is reflected in the gross overlap measure produced by the systeminsome cases there was a disagreement between dif ferent runs on the sentiment assigned to the wordsuch disagreements were addressed by comput ing the net overlap scores for each of the found words the total number of runs assigning the worda negative sentiment was subtracted from the to tal of the runs that consider it positivethus the greater the number of runs fetching the word and the greater the agreement be tween these runs on the assigned sentiment the higher the net overlap score of this wordthe net overlap scores obtained for each iden tified word were then used to stratify these wordsinto groups that reflect positive or negative dis tance of these words from the zero scorethe zero score was assigned to the wordnet adjectivesthat were not identified by step as bearing posi tive or negative sentiment 6 and to the words with equal number of positive and negative hits on several step runsthe performance measuresfor each of the groups were then computed to al low the comparison of step and human annotator performance on the words from the core and from the periphery of the sentiment categorythus foreach of the net overlap score groups both automatic and manual sentiment annota tions were compared to humanannotated gih4which was used as a gold standard in this experi menton 58 runs the system has identified 3 908english adjectives as positive 3 905 as nega tive while the remainder of wordnets 22 141 adjectives was deemed neutralof these 14 328 adjectives that step runs deemed neutral6the seed lists fed into step contained positive or neg ative but no neutral words since hm which was used as a source for these seed lists does not include any neutralsfigure 1 accuracy of word sentiment tagging884 were also found in gih4 andor hm lists which allowed us to evaluate step performance and hmgi agreement on the subset of neutrals as wellthe graph in figure 1 shows the distributionof adjectives by net overlap scores and the aver age accuracyagreement rate for each groupfigure 1 shows that the greater the net over lap score and hence the greater the distance of the word from the neutral subcategory the more accurate are step results and thegreater is the agreement between two 
teams of hu man annotators on average for all categories including neutrals the accuracy of step vs gih4 was 665 humanannotated hm had 787 accuracy vs gih4for the words with net overlap of 7 and greater both stepand hm had accuracy around 90the accu racy declined dramatically as net overlap scores approached zero in this categoryhumanannotated hm showed only 20 agree ment with gih4 while step which deemedthese words neutral rather than positive or neg ative performed with 57 accuracythese results suggest that the two measures ofword centrality net overlap score based on mul tiple step runs and the interannotator agreement are directly related 7thus the net overlap score can serve as a useful tool in the identification of core and peripheral membersof a fuzzy lexical category as well as in predic 7in our sample the coefficient of correlation between thetwo was 068the absolute net overlap score on the sub groups 0 to 10 was used in calculation of the coefficient of correlation213tion of interannotator agreement and system per formance on a subgroup of words characterized by a given net overlap score valuein order to make the net overlap score measure usable in sentiment tagging of texts and phrasesthe absolute values of this score should be normalized and mapped onto a standard 0 1 inter valsince the values of the net overlap score may vary depending on the number of runs used inthe experiment such mapping eliminates the vari ability in the score values introduced with changesin the number of runs performedin order to ac complish this normalization we used the value ofthe net overlap score as a parameter in the stan dard fuzzy membership sfunction this function maps the absolute values of the net overlap score onto the interval from 0 to 1 where 0 corresponds to the absence of membership in the category of sentiment and 1 reflects the highest degree of membership in this categorythe function can be defined as follows s 0 for you 22 foryou 122 for you 1 for you where you is the net overlap score for the word and are the three adjustable parameters is set to 1 is set to 15 and which represents a crossover point is defined as 2 8defined this way the sfunction assigns highest degree of membership to words that have the the net overlap score you 15the accuracy vs gih4 on this subset is 100the accuracy goes down as the degree of membership decreases and reaches 59 for values with the lowest degrees of membershipthis paper contributes to the development of nlp and semantic tagging systems in several respectsthe structure of the semantic category of sentimentthe analysis of the category of sentiment of english adjectives presented here suggests that this category is structured as a fuzzy set the distance from the coreof the category as measured by net over lap scores derived from multiple step runsis shown to affect both the level of interannotator agreement and the system perfor mance vs humanannotated gold standardthe list of sentimentbearing adjectivesthe list produced and crossvalidated by multiplestep runs contains 7 814 positive and negative english adjectives with an average ac curacy of 665 while the humanannotated list hm performed at 787 accuracy vs the gold standard 8the remaining14 328 adjectives were not identified as sen timent marked and therefore were considered neutralthe stratification of adjectives by their net overlap score can serve as an indicatorof their degree of membership in the cate gory of sentimentsince low degrees of membership are associated with greater ambiguity and 
interannotator disagreement the net overlap score valuecan provide researchers with a set of vol umeaccuracy tradeoffsfor example by including only the adjectives with the net overlap score of 4 and more the researchercan obtain a list of 1 828 positive and negative adjectives with accuracy of 81 vs gi h4 or 3 124 adjectives with 75 accuracy if the threshold is set at 3the normalization of the net overlap score values for the use inphrase and textlevel sentiment tagging systems was achieved using the fuzzy member ship function that we proposed here for the category of sentiment of english adjectivesfuture work in the direction laid out by thisstudy will concentrate on two aspects of sys tem developmentfirst further incremental improvements to the precision of the stepalgorithm will be made to increase the ac curacy of sentiment annotation through the use of adjectivenoun combinatorial patterns within glossessecond the resulting list of adjectives annotated with sentiment and withthe degree of word membership in the cate gory will be used in sentiment tagging of phrases and textsthis will enable us to compute the degree of importance of sentiment markers found in phrases and textsthe availability 8gih4 contains 1268 and hm list has 1336 positive andnegative adjectivesthe accuracy figures reported here in clude the errors produced at the boundary with neutrals214of the information on the degree of central ity of words to the category of sentiment mayimprove the performance of sentiment determination systems built to identify the senti ment of entire phrases or textssystem evaluation considerationsthe con tribution of this paper to the developmentof methodology of system evaluation is twofoldfirst this research emphasizes the i am portance of multiple runs on different seedlists for a more accurate evaluation of senti ment tag extraction system performancewehave shown how significantly the system re sults vary depending on the composition of the seed listsecond due to the high cost of manual an notation and other practical considerations most bootstrapping and other nlp systems are evaluated on relatively small manually annotated gold standards developed for agiven semantic categorythe implied assumption is that such a gold standard represents a random sample drawn from the pop ulation of all category members and hence system performance observed on this goldstandard can be projected to the whole se mantic categorysuch extrapolation is notjustified if the category is structured as a lex ical field with fuzzy boundaries in this casethe precision of both machine and human annotation is expected to fall when more peripheral members of the category are pro cessedin this paper the sentimentbearing words identified by the system were stratifiedbased on their net overlap score and evaluated in terms of accuracy of sentiment an notation within each stratumthese strata derived from net overlap scores reflect the degree of centrality of a given word to the semantic category and thus provide greater assurance that system performance on other words with the same net overlap score will be similar to the performance observed on the intersection of system results with the gold standardthe role of the interannotator disagree mentthe results of the study presented in this paper call for reconsideration of the roleof interannotator disagreement in the devel opment of lists of words manually annotated with semantic tagsit has been shown here that the interannotator agreement tends to fall as we proceed from the 
core of a fuzzysemantic category to its peripherytherefore the disagreement between the annota tors does not necessarily reflect a quality problem in human annotation but rather a structural property of the semantic categorythis suggests that interannotator disagree ment rates can serve as an important source of empirical information about the structural properties of the semantic category and canhelp define and validate fuzzy sets of seman tic category members for a number of nlp tasks and applications
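The net overlap score and the fuzzy S-function that maps it onto [0, 1] are central to the method above, but the function's definition is garbled in this extraction. Given the parameter values stated in the text (alpha = 1, gamma = 15, beta = (alpha + gamma) / 2 = 8), it is presumably the standard Zadeh S-function: S(u) = 0 for u <= alpha; 2((u - alpha)/(gamma - alpha))^2 for alpha <= u <= beta; 1 - 2((u - gamma)/(gamma - alpha))^2 for beta <= u <= gamma; and 1 for u >= gamma. A small sketch of both computations follows; the run format passed to net_overlap_scores() is a hypothetical convenience, not the paper's data structure.

```python
# Sketch of the net overlap score and its fuzzy S-function normalisation
# described above (parameters follow the text: alpha = 1, gamma = 15,
# beta = (alpha + gamma) / 2).
from collections import Counter

def net_overlap_scores(runs):
    """`runs` is a list of (positive_words, negative_words) set pairs, one
    per STEP run; the score is (#runs tagging the word positive) minus
    (#runs tagging it negative)."""
    score = Counter()
    for pos, neg in runs:
        for w in pos:
            score[w] += 1
        for w in neg:
            score[w] -= 1
    return score

def s_function(u, alpha=1.0, gamma=15.0):
    """Standard fuzzy S-function mapping |net overlap score| onto [0, 1]."""
    u = abs(u)
    beta = (alpha + gamma) / 2.0
    if u <= alpha:
        return 0.0
    if u <= beta:
        return 2.0 * ((u - alpha) / (gamma - alpha)) ** 2
    if u <= gamma:
        return 1.0 - 2.0 * ((u - gamma) / (gamma - alpha)) ** 2
    return 1.0

# e.g. a word tagged positive in 9 runs and negative in 1 has score 8,
# which maps to the crossover membership value 0.5.
```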
E06-1027
Mining WordNet for a fuzzy sentiment: sentiment tag extraction from WordNet glosses. Many of the tasks required for semantic tagging of phrases and texts rely on a list of words annotated with some semantic features. We present a method for extracting sentiment-bearing adjectives from WordNet using the sentiment tag extraction program (STEP). We did 58 STEP runs on unique non-intersecting seed lists drawn from a manually annotated list of positive and negative adjectives and evaluated the results against other manually annotated lists. The 58 runs were then collapsed into a single set of 7813 unique words. For each word we computed a net overlap score by subtracting the total number of runs assigning this word a negative sentiment from the total of the runs that consider it positive. We demonstrate that the net overlap score can be used as a measure of the word's degree of membership in the fuzzy category of sentiment: the core adjectives, which had the highest net overlap scores, were identified most accurately both by STEP and by human annotators, while the words on the periphery of the category had the lowest scores and were associated with low rates of inter-annotator agreement. We find that the performance of automatic annotation of subjectivity at the word level can be hurt by the presence of subjectivity-ambiguous words in the training sets. Non-neutral adjectives were extracted from WordNet and assigned fuzzy sentiment category membership/centrality scores and tags. WordNet synonyms, antonyms and glosses are used to iteratively expand a list of seeds.
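STEP's gloss-scanning pass (the second of the three passes described in the paper above) might look roughly like the sketch below: WordNet entries whose glosses mention words from the expanded positive or negative seed lists inherit that label, and words that acquire both labels within a run are filtered out. Pass 1 (relation-based seed expansion) and pass 3 (part-of-speech clean-up with Brill's tagger) are omitted, and matching on whitespace-split gloss tokens is cruder than the original.

```python
# Rough sketch of STEP's second (gloss-scanning) pass over WordNet
# adjectives; expanded_pos / expanded_neg are the seed lists grown in
# pass 1, which is not shown here.
from nltk.corpus import wordnet as wn

def gloss_pass(expanded_pos, expanded_neg):
    tagged_pos, tagged_neg = set(), set()
    for synset in wn.all_synsets(wn.ADJ):
        gloss_words = set(synset.definition().lower().split())
        hits_p = bool(gloss_words & expanded_pos)
        hits_n = bool(gloss_words & expanded_neg)
        if hits_p == hits_n:      # no evidence, or contradictory evidence
            continue
        for lemma in synset.lemmas():
            word = lemma.name().lower()
            (tagged_pos if hits_p else tagged_neg).add(word)
    # Words tagged with both labels across synsets are dropped, mirroring
    # the clean-up step described in the text.
    return tagged_pos - tagged_neg, tagged_neg - tagged_pos
```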
cder efficient mt evaluation using block movements most stateoftheart evaluation measures for machine translation assign high costs to movements of word blocks in many cases though such movements still result in correct or almost correct sentences in this paper we will present a new evaluation measure which explicitly models block reordering as an edit operation our measure can be exactly calculated in quadratic time furthermore we will show how some evaluation measures can be improved research in machine translation depends heavily on the evaluation of its resultsespecially for the development of an mt system an evaluation measure is needed which reliably assesses the quality of mt outputsuch a measure will help analyze the strengths and weaknesses of different translation systems or different versions of the same system by comparing output at the sentence levelin most applications of mt understandability for humans in terms of readability as well as semantical correctness should be the evaluation criterionbut as human evaluation is tedious and costintensive automatic evaluation measures are used in most mt research tasksa high correlation between these automatic evaluation measures and human evaluation is thus desirablestateoftheart measures such as bleu or nist aim at measuring the translation quality rather on the document level1 than on the level of single sentencesthey are thus not wellsuited for sentencelevel evaluationthe introduction of smoothing solves this problem only partiallyin this paper we will present a new automatic error measure for mt the cder which is designed for assessing mt quality on the sentence levelit is based on edit distance such as the wellknown word error rate but allows for reordering of blocksnevertheless by defining reordering costs the ordering of the words in a sentence is still relevant for the measurein this the new measure differs significantly from the position independent error rate by generally finding an optimal solution for such a reordering problem is np hard as is shown in in previous work researchers have tried to reduce the complexity for example by restricting the possible permutations on the blocklevel or by approximation or heuristics during the calculationnevertheless most of the resulting algorithms still have high run times and are hardly applied in practice or give only a rough approximationan overview of some betterknown measures can be found in section 31in contrast to this our new measure can be calculated very efficientlythis is achieved by requiring complete and disjoint coverage of the blocks only for the reference sentence and not for the candidate translationwe will present an algorithm which computes the new error measure in quadratic timethe new evaluation measure will be investigated and compared to stateoftheart methods on two translation tasksthe correlation with human assessment will be measured for several different statistical mt systemswe will see that the new measure significantly outperforms the existing approachesas a further improvement we will introduce word dependent substitution coststhis method will be applicable to the new measure as well as to established measures like wer and perstarting from the observation that the substitution of a word with a similar one is likely to affect translation quality less than the substitution with a completely different word we will show how the similarity of words can be accounted for in automatic evaluation measuresthis paper is organized as follows in section 2 we will 
present the state of the art in mt evaluation and discuss the problem of block reorderingsection 3 will introduce the new error measure cder and will show how it can be calculated efficientlythe concept of worddependent substitution costs will be explained in section 4in section 5 experimental results on the correlation of human judgment with the cder and other wellknown evaluation measures will be presentedsection 6 will conclude the paper and give an outlook on possible future workin mt as opposed to other natural language processing tasks like speech recognition there is usually more than one correct outcome of a taskin many cases alternative translations of a sentence differ from each other mostly by the ordering of blocks of wordsconsequently an evaluation measure for mt should be able to detect and allow for block reorderingnevertheless a higher amount of reordering between a candidate translation and a reference translation should still be reflected in a worse evaluation scorein other words the more blocks there are to be reordered between reference and candidate sentence the higher we want the measure to evaluate the distance between these sentencesstateoftheart evaluation measures for mt penalize movement of blocks rather severely ngram based scores such as bleu or nist still yield a high unigram precision if blocks are reorderedfor higherorder ngrams though the precision dropsas a consequence this affects the overall score significantlywer which is based on levenshtein distance penalizes the reordering of blocks even more heavilyit measures the distance by substitution deletion and insertion operations for each word in a relocated blockper on the other hand ignores the ordering of the words in the sentences completelythis often leads to an overly optimistic assessment of translation qualitythe approach we pursue in this paper is to extend the levenshtein distance by an additional operation namely block movementthe number of blocks in a sentence is equal to the number of gaps among the blocks plus onethus the block movements can equivalently be expressed as long jump operations that jump over the gaps between two blocksthe costs of a long jump are constantthe blocks are read in the order of one of the sentencesthese long jumps are combined with the classical levenshtein edit operations namely insertion deletion substitution and the zerocost operation identitythe resulting long jump distance dlj gives the minimum number of operations which are necessary to transform the candidate sentence into the reference sentencelike the levenshtein distance the long jump distance can be depicted using an alignment grid as shown in figure 1 here each grid point corresponds to a pair of interword positions in candidate and reference sentence respectively dlj is the minimum cost of a path between the lower left and the upper right alignment grid point which covers all reference and candidate wordsdeletions and insertions correspond to horizontal and vertical edges respectivelysubstitutions and identity operations correspond to diagonal edgesedges between arbitrary grid points from the same row correspond to long jump operationsit is easy to see that dlj is symmetricalin the example the best path contains one deletion edge one substitution edge and three long jump edgestherefore the long jump distance between the sentences is fivein contrast the best levenshtein path contains one deletion edge four identity and five consecutive substitution edges the levenshtein distance between the two sentences 
is sixthe effect of reordering on the bleu measure is even higher in this example whereas 8 of the 10 unigrams from the candidate sentence can be found in the reference sentence this holds for only 4 bigrams and 1 trigramnot a single one of the 7 candidate fourgrams occurs in the reference sentence showed that finding an optimal path in a long jump alignment grid is an nphard problemour experiments showed that the calculation of exact long jump distances becomes impractical for sentences longer than 20 wordsa possible way to achieve polynomial runtime is to restrict the number of admissible block permutationsthis has been implemented by in the inversion word error ratealternatively a heuristic or approximative distance can be calculated as in gtm by an implementation of both approaches at the same time can be found in ter by in this paper we will present another approach which has a suitable runtime while still maintaining completeness of the calculated measurethe idea of the proposed method is to drop some restrictions on the alignment paththe long jump distance as well as the levenshtein distance require both reference and candidate translation to be covered completely and disjointlywhen extending the metric by block movements we drop this constraint for the candidate translationthat is only the words in the reference sentence have to be covered exactly once whereas those in the candidate sentence can be covered zero one or multiple timesdropping the constraints makes an efficient computation of the distance possiblewe drop the constraints for the candidate sentence and not for the reference sentence because we do not want any information contained in the reference to be omittedmoreover the reference translation will not contain unnecessary repetitions of blocksthe new measure which will be called cder in the following can thus be seen as a measure oriented towards recall while measures like bleu are guided by precisionthe cder is based on the cdcd distance2 introduced in the authors show there that the problem of finding the optimal solution can be solved in o time where i is the length of the candidate sentence and l the length of the reference sentencewithin this paper we will refer to this distance as dcd in the next subsection we will show how it can be computed in o time using a modification of the levenshtein algorithmwe also studied the reverse direction of the described measure that is we dropped the coverage constraints for the reference sentence instead of the candidate sentenceadditionally the maximum of both directions has been considered as distance measurethe results in section 52 will show that the measure using the originally proposed direction has a significantly higher correlation with human evaluation than the other directionsour algorithm for calculating dcd is based on the dynamic programming algorithm for the levenshtein distance the levenshtein distance dlev dlev and dlevconsequently the levenshtein distance can be calculated in time othis algorithm can easily be extended for the calculation of dcd as follows again we define an auxiliary quantity d as insertions deletions and substitutions are handled the same way as in the levenshtein algorithmnow assume that an optimal dcd path has been found then each long jump edge within 2c stands for cover and d for disjointwe adopted this notion for our measures this path will always start at a node with the lowest d value in its row3consequently we use the following modification of the levenshtein recursion where δ is the 
kronecker deltafigure 2 shows the possible predecessors of a grid pointthe calculation of d requires all values of d to be known even for i ithus the calculation takes three steps for each l i0 there is always an optimal dcd alignment path that does not contain any deletion edges because each deletion can be replaced by a long jump at the same coststhis is different for a dlj path because here each candidate word must be covered exactly onceassume now that the candidate sentence consists of i words and the reference sentence consists of l words with i l then at most l candidate words can be covered by substitution or identity edgestherefore the remaining candidate words must be covered by deletion edgesthis means that at least i l deletion edges will be found in any dlj path which leads to dlj dcd i l in this caseconsequently the length difference between the two sentences gives us a useful miscoverage penalty lplen this penalty is independent of the dcd alignment paththus an optimal dcd alignment path is optimal for dcd lplen as welltherefore the search algorithm in section 32 will find the optimum for this sumabsolute miscoverage let coverage be the number of substitution identity and deletion edges that cover a candidate word ei in a dcd pathif we had a complete and disjoint alignment for the candidate word coverage would be 1 for each iin general this is not the casewe can use the absolute miscoverage as a penalty lpmisc for dcd each of these steps can be done in time otherefore this algorithm calculates dcd in time o and space oas the cder does not penalize candidate translations which are too long we studied the use of a length penalty or miscoverage penaltythis determines the difference in sentence lengths between candidate and referencetwo definitions of such a penalty have been studied for this workthis miscoverage penalty is not independent of the alignment pathconsequently the proposed search algorithm will not necessarily find an optimal solution for the sum of dcd and lpmiscthe idea behind the absolute miscoverage is that one can construct a valid but not necessarily optimal dlj path from a given dcd paththis procedure is illustrated in figure 3 and takes place in two steps 1for each block of overcovered candidate words replace the aligned substitution andor identity edges by insertion edges move the long jump at the beginning of the block accordingly2for each block of undercovered candidate words add the corresponding number of deletion edges move the long jump at the beginning of the block accordinglythis also shows that there cannot be4 a polynomial time algorithm that calculates the minimum of dcd lpmisc for arbitrary pairs of sentences because this minimum is equal to dljwith these miscoverage penalties inexpensive lower and upper bounds for dlj can be calculated because the following inequality holds all automatic error measures which are based on the edit distance apply fixed costs for the substitution of wordshowever this is counterintuitive as replacing a word with another one which has a similar meaning will rarely change the meaning of a sentence significantlyon the other hand replacing the same word with a completely different one probably willtherefore it seems advisable to make substitution costs dependent on the semantical andor syntactical dissimilarity of the wordsto avoid awkward case distinctions we assume that a substitution cost function csub for two words e e meets the following requirements 3the costs of substituting a word e by e are always equal or 
lower than those of deleting e and then inserting ein short csub 2under these conditions the algorithms for wer and cder can easily be modified to use worddependent substitution costsfor example the only necessary modification in the cder algorithm in equation 1 is to replace 1 δ by csubfor the per it is no longer possible to use a linear time algorithm in the general caseinstead a modification of the hungarian algorithm can be usedthe question is now how to define the worddependent substitution costswe have studied two different approachesa pragmatic approach is to compare the spelling of the words to be substituted with each otherthe more similar the spelling is the more similar we consider the words to be and the lower we want the substitution costs between themin english this works well with similar tenses of the same verb or with genitives or plurals of the same nounnevertheless a similar spelling is no guarantee for a similar meaning because prefixes such as mis in or un can change the meaning of a word significantlyan obvious way of comparing the spelling is the levenshtein distancehere words are compared on character levelto normalize this distance into a range from 0 to 1 we divide the absolute distance by the length of the levenshtein alignment pathanother characterbased substitution cost function we studied is based on the common prefix length of both wordsin english different tenses of the same verb share the same prefix which is usually the stemthe same holds for different cases numbers and genders of most nouns and adjectiveshowever it does not hold if verb prefixes are changed or removedon the other hand the common prefix length is sensitive to critical prefixes such as mis for the same reasonconsequently the common prefix length normalized by the average length of both words gives a reasonable measure for the similarity of two wordsto transform the normalized common prefix length into costs this fraction is then subtracted from 1more sophisticated methods could be considered for worddependent substitution costs as wellexamples of such methods are the introduction of information weights as in the nist measure or the comparison of stems or synonyms as in meteor the different evaluation measures were assessed experimentally on data from the chineseenglish and the arabicenglish task of the nist 2004 evaluation workshop in this evaluation campaign 4460 and 1735 candidate translations respectively generated by different research mt systems were evaluated by human judges with regard to fluency and adequacyfour reference translations are provided for each candidate translationdetailed corpus statistics are listed in table 2for the experiments in this study the candidate translations from these tasks were evaluated using different automatic evaluation measurespearsons correlation coefficient r between automatic evaluation and the sum of fluency and adequacy was calculatedas it could be arguable whether pearsons r is meaningful for categorical data like human mt evaluation we have also calculated kendalls correlation coefficient t because of the high number of samples versus the low number of categories we calculated t separately for each source sentencethese experiments showed that kendalls t reflects the same tendencies as pearsons r regarding the ranking of the evaluation measuresbut only the latter allows for an efficient calculation of confidence intervalsconsequently figures of t are omitted in this paperdue to the small number of samples for evaluation on system level all 
correlation coefficients between automatic and human evaluation on system level are very close to 1therefore they do not show any significant differences for the different evaluation measuresadditional experiments on data from the nist 2002 and 2003 workshops and from the iwslt 2004 evaluation workshop confirm the findings from the nist 2004 experiments for the sake of clarity they are not included hereall correlation coefficients presented here were calculated for sentence level evaluationfor comparison with stateoftheart evaluation measures we have also calculated the correlation between human evaluation and wer and bleu which were both measures of choice in several international mt evaluation campaignsfurthermore we included ter as a recent heuristic block movement measure in some of our experiments for comparison with our measureas the bleu score is unsuitable for sentence level evaluation in its original definition bleus smoothing as described by is performedadditionally we added sentence boundary symbols for bleu and a different reference length calculation scheme for ter because these changes improved the correlation between human evaluation and the two automatic measuresdetails on this have been described in table 3 presents the correlation of bleu wer and cder with human assessmentit can be seen that cder shows better correlation than bleu and wer on both corporaon the chineseenglish task the smoothed bleu score has a higher sentencelevel correlation than werhowever this is not the case for the arabic english taskso none of these two measures is superior to the other one but they are both outperformed by cderif the direction of cder is reversed the correlation with human evaluation is much loweradditionally we studied the use of the maximum of the distances in both directionsthis has a lower correlation than taking the original cder as table 3 showsnevertheless the maximum still performs slightly better than bleu and werthe problem of how to avoid a preference of overly long candidate sentences by cder remains unsolved as can be found in table 4 each of the proposed penalties infers a significant decrease of correlation between the cder and human evaluationfuture research will aim at finding a suitable length penaltyespecially if cder is applied in system development such a penalty will be needed as preliminary optimization experiments have shownwer the correlation with human judgment is increased by about 2 absolute on both language pairsthe levenshteinbased substitution costs are better suited for wer than the scheme based on common prefix lengthfor cder there is hardly any difference between the two methodsexperiments on five more corpora did not give any significant evidence which of the two substitution costs correlates better with human evaluationbut as the prefixbased substitution costs improved correlation more consistently across all corpora we employed this method in our next experimentan interesting topic in mt evaluation research is the question whether a linear combination of two mt evaluation measures can improve the correlation between automatic and human evaluationparticularly we expected the combination of cder and per to have a significantly higher correlation with human evaluation than the measures alonecder has the ability to reward correct local ordering whereas per penalizes overly long candidate sentencesthe two measures were combined with linear interpolationin order to determine the weights we performed data analysis on seven different corporathe result 
was consistent across all data collections and language pairs: a linear combination of about 60% CDER and 40% PER has a significantly higher correlation with human evaluation than each of the measures alone. For the two corpora studied here, the results of the combination can be found in Table 6. On the Chinese-English task, there is an additional gain of more than 1% absolute in correlation over CDER alone. The combined error measure is the best method in both cases. The last line in Table 6 shows the 95% confidence interval for the correlation. We see that the new measure CDER, combined with PER, has a significantly higher correlation with human evaluation than the existing measures BLEU, TER, and WER on both corpora. We presented CDER, a new automatic evaluation measure for MT, which is based on edit distance extended by block movements. CDER allows for reordering blocks of words at constant cost. Unlike previous block movement measures, CDER can be calculated exactly in quadratic time. Experimental evaluation on two different translation tasks shows a significantly improved correlation with human judgment in comparison with state-of-the-art measures such as BLEU. Additionally, we showed how word-dependent substitution costs can be applied to enhance the new error measure as well as existing approaches. The highest correlation with human assessment was achieved through linear interpolation of the new CDER with PER. Future work will aim at finding a suitable length penalty for CDER. In addition, more sophisticated definitions of the word-dependent substitution costs will be investigated. Furthermore, it will be interesting to see how this new error measure affects system development; we expect it to allow for a better sentence-wise error analysis. For system optimization, preliminary experiments have shown the need for a suitable length penalty. This material is partly based upon work supported by the Defense Advanced Research Projects Agency under contract No. HR0011-06-C-0023, and was partly funded by the European Union under the integrated project TC-STAR (Technology and Corpora for Speech to Speech Translation).
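The two spelling-based substitution cost functions described above (the normalized Levenshtein distance and the common-prefix measure) are straightforward to compute. The following is a minimal Python sketch, not the authors' implementation: function names are illustrative, and the normalization by the length of the Levenshtein alignment path is approximated here by the length of the longer word.

```python
def levenshtein(a: str, b: str) -> int:
    """Plain character-level Levenshtein distance between two words."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution/match
        prev = curr
    return prev[-1]

def levenshtein_sub_cost(w1: str, w2: str) -> float:
    """Spelling-based substitution cost in [0, 1]; the longer word length
    approximates the Levenshtein alignment path length used above."""
    if not w1 and not w2:
        return 0.0
    return levenshtein(w1, w2) / max(len(w1), len(w2))

def prefix_sub_cost(w1: str, w2: str) -> float:
    """1 minus the common prefix length normalized by the average length
    of the two words, as described above."""
    common = 0
    for c1, c2 in zip(w1, w2):
        if c1 != c2:
            break
        common += 1
    avg_len = (len(w1) + len(w2)) / 2.0
    return 1.0 - common / avg_len if avg_len > 0 else 0.0
```

Either cost can then stand in for the 0/1 substitution cost inside WER or CDER, as described above.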
E06-1031
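As a small illustration of the combined measure discussed above, the sketch below interpolates sentence-level CDER and PER scores with the roughly 60/40 weighting that gave the best correlation with human judgement; the function name and the assumption that both scores are already computed per sentence are mine.

```python
def interpolate_cder_per(cder_scores, per_scores, lam=0.6):
    """Sentence-level linear interpolation of CDER and PER scores.
    A weight of roughly 0.6 on CDER and 0.4 on PER gave the best
    correlation with human judgement in the experiments above."""
    return [lam * c + (1.0 - lam) * p
            for c, p in zip(cder_scores, per_scores)]
```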
CDER: efficient MT evaluation using block movements. Most state-of-the-art evaluation measures for machine translation assign high costs to movements of word blocks. In many cases, though, such movements still result in correct or almost correct sentences. In this paper we present a new evaluation measure which explicitly models block reordering as an edit operation. Our measure can be calculated exactly in quadratic time. We consider edit distance for word substitution and reordering. Our CDER measure is based on edit distance, such as the well-known WER, but allows reordering of blocks.
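The correlation analysis above compares each automatic measure against the sum of fluency and adequacy using Pearson's r. A hypothetical sketch of that computation (the score values below are invented for illustration):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson's correlation coefficient (assumes non-constant lists)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

# Invented example: one automatic score and one fluency + adequacy sum
# (each judged on a 1-5 scale) per candidate translation.
metric_scores = [0.42, 0.35, 0.51, 0.28]
human_scores = [7, 6, 9, 5]
print(pearson_r(metric_scores, human_scores))
```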
reevaluation the role of bleu in machine translation research we argue that the machine translation community is overly reliant on the bleu machine translation evaluation metric we show that an improved bleu score is neither necessary nor sufficient for achieving an actual improvement in translation quality and give two significant counterexamples to bleus correlation with human judgments of quality this offers new potential for research which was previously deemed unpromising by an inability to improve upon bleu scores over the past five years progress in machine translation and to a lesser extent progress in natural language generation tasks such as summarization has been driven by optimizing against ngrambased evaluation metrics such as bleu the statistical machine translation community relies on the bleu metric for the purposes of evaluating incremental system changes and optimizing systems through minimum error rate training conference papers routinely claim improvements in translation quality by reporting improved bleu scores while neglecting to show any actual example translationsworkshops commonly compare systems using bleu scores often without confirming these rankings through manual evaluationall these uses of bleu are predicated on the assumption that it correlates with human judgments of translation quality which has been shown to hold in many cases however there is a question as to whether minimizing the error rate with respect to bleu does indeed guarantee genuine translation improvementsif bleus correlation with human judgments has been overestimated then the field needs to ask itself whether it should continue to be driven by bleu to the extent that it currently isin this paper we give a number of counterexamples for bleus correlation with human judgmentswe show that under some circumstances an improvement in bleu is not sufficient to reflect a genuine improvement in translation quality and in other circumstances that it is not necessary to improve bleu in order to achieve a noticeable improvement in translation qualitywe argue that bleu is insufficient by showing that bleu admits a huge amount of variation for identically scored hypothesestypically there are millions of variations on a hypothesis translation that receive the same bleu scorebecause not all these variations are equally grammatically or semantically plausible there are translations which have the same bleu score but a worse human evaluationwe further illustrate that in practice a higher bleu score is not necessarily indicative of better translation quality by giving two substantial examples of bleu vastly underestimating the translation quality of systemsfinally we discuss appropriate uses for bleu and suggest that for some research projects it may be preferable to use a focused manual evaluation insteadthe rationale behind the development of bleu is that human evaluation of machine translation can be time consuming and expensivean automatic evaluation metric on the other hand can be used for frequent tasks like monitoring incremental system changes during development which are seemingly infeasible in a manual evaluation settingthe way that bleu and other automatic evaluation metrics work is to compare the output of a machine translation system against reference human translationsmachine translation evaluation metrics differ from other metrics that use a reference like the word error rate metric that is used in speech recognition because translations have a degree of variation in terms of word choice and in 
terms of variant ordering of some phrasesbleu attempts to capture allowable variation in word choice through the use of multiple reference translations in order to overcome the problem of variation in phrase order bleu uses modified ngram precision instead of wers more strict string edit distancebleus ngram precision is modified to eliminate repetitions that occur across sentencesfor example even though the bigram to miami is repeated across all four reference translations in table 1 it is counted only once in a hypothesis translationtable 2 shows the ngram sets created from the reference translationspapineni et al calculate their modified precision score pn for each ngram length by summing over the matches for every hypothesis sentence s in the complete corpus c as counting punctuation marks as separate tokens the hypothesis translation given in table 1 has 15 unigram matches 10 bigram matches 5 trigram matches and three 4gram matches the hypothesis translation contains a total of 18 unigrams 17 bigrams 16 trigrams and 15 4gramsif the complete corpus consisted of this single sentence peared as being calm carry escorted he him in led plane quite seemed take that the to to to was was which while will would2grams american plane florida miami miami in orejuela appeared orejuela seemed appeared calm as he being escorted being led calm as calm while carry him escorted to he was him to in florida led to plane that plane which quite calm seemed quite take him that was that would the american the plane to miami to carry to the was being was led was to which will while being will take would take florida 3grams american plane that american plane which miami florida miami in florida orejuela appeared calm orejuela seemed quite appeared calm as appeared calm while as he was being escorted to being led to calm as he calm while being carry him to escorted to the he was being he was led him to miami in florida led to the plane that was plane that would plane which will quite calm as seemed quite calm take him to that was to that would take the american plane the plane that to miami to miami in to carry him to the american to the plane was being led was led to was to carry which will take while being escorted will take him would take him florida then the modified precisions would be p1 83 p2 59 p3 31 and p4 2each pn is combined and can be weighted by specifying a weight wnin practice each pn is generally assigned an equal weightbecause bleu is precision based and because recall is difficult to formulate over multiple reference translations a brevity penalty is introduced to compensate for the possibility of proposing highprecision hypothesis translations which are too shortthe brevity penalty is calculated as where c is the length of the corpus of hypothesis translations and r is the effective reference corpus length1 thus the bleu score is calculated as a bleu score can range from 0 to 1 where higher scores indicate closer matches to the reference translations and where a score of 1 is assigned to a hypothesis translation which exactly 1the effective reference corpus length is calculated as the sum of the single reference translation from each set which is closest to the hypothesis translationorejuela appeared calm as he was led to the american plane which will take him to miami floridaorejuela appeared calm while being escorted to the plane that would take him to miami floridaorejuela appeared calm as he was being led to the american plane that was to carry him to miami in floridaorejuela seemed quite 
calm as he was being led to the american plane that would take him to miami in florida matches one of the reference translationsa score of 1 is also assigned to a hypothesis translation which has matches for all its ngrams in the clipped reference ngrams and which has no brevity penaltythe primary reason that bleu is viewed as a useful standin for manual evaluation is that it has been shown to correlate with human judgments of translation qualitypapineni et al showed that bleu correlated with human judgments in its rankings of five chinesetoenglish machine translation systems and in its ability to distinguish between human and machine translationsbleus correlation with human judgments has been further tested in the annual nist machine translation evaluation exercise wherein bleus rankings of arabictoenglish and chinesetoenglish systems is verified by manual evaluationin the next section we discuss theoretical reasons why bleu may not always correlate with human judgmentswhile bleu attempts to capture allowable variation in translation it goes much further than it shouldin order to allow some amount of variant order in phrases bleu places no explicit constraints on the order that matching ngrams occur into allow variation in word choice in translation bleu uses multiple reference translations but puts very few constraints on how ngram matches can be drawn from the multiple reference translationsbecause bleu is underconstrained in these ways it allows a tremendous amount of variation far beyond what could reasonably be considered acceptable variation in translationin this section we examine various permutations and substitutions allowed by bleuwe show that for an average hypothesis translation there are millions of possible variants that would each receive a similar bleu scorewe argue that because the number of translations that score the same is so large it is unlikely that all of them will be judged to be identical in quality by human annotatorsthis means that it is possible to have items which receive identical bleu scores but are judged by humans to be worseit is also therefore possible to have a higher bleu score without any genuine improvement in translation qualityin sections 31 and 32 we examine ways of synthetically producing such variant translationsone way in which variation can be introduced is by permuting phrases within a hypothesis translationa simple way of estimating a lower bound on the number of ways that phrases in a hypothesis translation can be reordered is to examine bigram mismatchesphrases that are bracketed by these bigram mismatch sites can be freely permuted because reordering a hypothesis translation at these points will not reduce the number of matching ngrams and thus will not reduce the overall bleu scorehere we denote bigram mismatches for the hypothesis translation given in table 1 with vertical bars appeared calm when he was taken to the american plane which will to miami floridawe can randomly produce other hypothesis translations that have the same bleu score but are radically different from each otherbecause bleu only takes order into account through rewarding matches of higher order ngrams a hypothesis sentence may be freely permuted around these bigram mismatch sites and without reducing the bleu scorethus which will he was when taken appeared calm to the american plane to miami florida receives an identical score to the hypothesis translation in table 1if b is the number of bigram matches in a hypothesis translation and k is its length then there are 
possible ways to generate similarly scored items using only the words in the hypothesis translation2 thus for the example hypothesis translation there are at least 40320 different ways of permuting the sentence and receiving a similar bleu scorethe number of permutations varies with respect to sentence length and number of bigram mismatchestherefore as a hypothesis translation approaches being an identical match to one of the reference translations the amount of variance decreases significantlyso as translations improve spurious variation goes downhowever at todays levels the amount of variation that bleu admits is unacceptably highfigure 1 gives a scatterplot of each of the hypothesis translations produced by the second best bleu system from the 2005 nist mt evaluationthe number of possible permutations for some translations is greater than 1073in addition to the factorial number of ways that similarly scored bleu items can be generated by permuting phrases around bigram mismatch points additional variation may be synthesized by drawing different items from the reference ngramsfor example since the hypothesis translation from table 1 has a length of 18 with 15 unigram matches 10 bigram matches 5 trigram matches and three 4gram matches we can artificially construct an identically scored hypothesis by drawing an identical number of matching ngrams from the reference translationstherefore the far less plausible was being led to the calm as he was would take carry him seemed quite when taken would receive the same bleu score as the hypothesis translation from table 1 even though human judges would assign it a much lower scorethis problem is made worse by the fact that bleu equally weights all items in the reference sentences therefore omitting contentbearing lexical items does not carry a greater penalty than omitting function wordsthe problem is further exacerbated by bleu not having any facilities for matching synonyms or lexical variantstherefore words in the hypothesis that did not appear in the references can be substituted with arbitrary words because they do not contribute towards the bleu scoreunder bleu we could just as validly use the words black and helicopters as we could when and takenthe lack of recall combined with naive token identity means that there can be overlap between similar items in the multiple reference translationsfor example we can produce a translation which contains both the words carry and take even though they arise from the same source wordthe chance of problems of this sort being introduced increases as we add more reference translationsbleus inability to distinguish between randomly generated variations in translation hints that it may not correlate with human judgments of translation quality in some casesas the number of identically scored variants goes up the likelihood that they would all be judged equally plausible goes downthis is a theoretical point and while the variants are artificially constructed it does highlight the fact that bleu is quite a crude measurement of translation qualitya number of prominent factors contribute to bleus crudeness each of these failures contributes to an increased amount of inappropriately indistinguishable translations in the analysis presented abovegiven that bleu can theoretically assign equal scoring to translations of obvious different quality it is logical that a higher bleu score may not how much of the meaning expressed in the reference translation is also expressed in the hypothesis translationreference iran had 
already announced kharazi would boycott the conference after jordans king abdullah ii accused iran of meddling in iraqs affairs necessarily be indicative of a genuine improvement in translation qualitythis begs the question as to whether this is only a theoretical concern or whether bleus inadequacies can come into play in practicein the next section we give two significant examples that show that bleu can indeed fail to correlate with human judgments in practicethe nist machine translation evaluation exercise has run annually for the past five years as part of darpas tides programthe quality of chinesetoenglish and arabictoenglish translation systems is evaluated both by using bleu score and by conducting a manual evaluationas such the nist mt eval provides an excellent source of data that allows bleus correlation with human judgments to be verifiedlast years evaluation exercise was startling in that bleus rankings of the arabicenglish translation systems failed to fully correspond to the manual evaluationin particular the entry that was ranked 1st in the human evaluation was ranked 6th by bleuin this section we examine bleus failure to correctly rank this entrythe manual evaluation conducted for the nist mt eval is done by english speakers without reference to the original arabic or chinese documentstwo judges assigned each sentence in table 4 two hypothesis translations with similar bleu scores but different human scores and one of four reference translations the hypothesis translations a subjective 15 score along two axes adequacy and fluency table 3 gives the interpretations of the scoreswhen first evaluating fluency the judges are shown only the hypothesis translationthey are then shown a reference translation and are asked to judge the adequacy of the hypothesis sentencestable 4 gives a comparison between the output of the system that was ranked 2nd by bleu3 and of the entry that was ranked 6th in bleu but 1st in the human evaluation the example is interesting because the number of matching ngrams for the two hypothesis translations is roughly similar but the human scores are quite differentthe first hypothesis is less adequate because it fails to indicated that kharazi is boycotting the conference and because it inserts the word stood before accused which makes the abdullahs actions less clearthe second hypothesis contains all of the information of the reference but uses some synonyms and paraphrases which would not picked up on by bleu will not attend for would boycott and interfering for meddling ments of fluency with r2 0002 when the outlier entry is included figures 2 and 3 plot the average human score for each of the seven nist entries against its bleu scoreit is notable that one entry received a much higher human score than would be anticipated from its low bleu scorethe offending entry was unusual in that it was not fully automatic machine translation instead the entry was aided by monolingual english speakers selecting among alternative automatic translations of phrases in the arabic source sentences and postediting the result the remaining six entries were all fully automatic machine translation systems in fact they were all phrasebased statistical machine translation system that had been trained on the same parallel corpus and most used bleubased minimum error rate training to optimize the weights of their log linear models feature functions this opens the possibility that in order for bleu to be valid only sufficiently similar systems should be compared with one 
anotherfor instance when measuring correlation using pearsons we get a very low correlation of r2 014 when the outlier in figure 2 is included but a strong r2 087 when it is excludedsimilarly figure 3 goes from r2 0002 to a much stronger r2 0742systems which explore different areas of translation space may produce output which has differing characteristics and might end up in different regions of the human scores bleu score graphwe investigated this by performing a manual evaluation comparing the output of two statistical machine translation systems with a rulebased machine translation and seeing whether bleu correctly ranked the systemswe used systran for the rulebased system and used the frenchenglish portion of the europarl corpus to train the smt systems and to evaluate all three systemswe built the first phrasebased smt system with the complete set of europarl data and optimized its feature functions using minimum error rate training in the standard way we evaluated it and the systran system with bleu using a set of 2000 held out sentence pairs using the same normalization and tokenization schemes on both systems outputwe then built a number of smt systems with various portions of the training corpus and selected one that was trained with 1 of the data which had a bleu score that was close to but still higher than that for the rulebased systemwe then performed a manual evaluation where we had three judges assign fluency and adequacy ratings for the english translations of 300 french sentences for each of the three systemsthese scores are plotted against the systems bleu scores in figure 4the graph shows that the bleu score for the rulebased system vastly underestimates its actual qualitythis serves as another significant counterexample to bleus correlation with human judgments of translation quality and further increases the concern that bleu may not be appropriate for comparing systems which employ different translation strategiesa number of projects in the past have looked into ways of extending and improving the bleu metricdoddington suggested changing bleus weighted geometric average of ngram matches to an arithmetic average and calculating the brevity penalty in a slightly different mannerhovy and ravichandra suggested increasing bleus sensitivity to inappropriate phrase movement by matching partofspeech tag sequences against reference translations in addition to bleus ngram matchesbabych and hartley extend bleu by adding frequency weighting to lexical items through tfidf as a way of placing greater emphasis on contentbearing words and phrasestwo alternative automatic translation evaluation metrics do a much better job at incorporating recall than bleu doesmelamed et al formulate a metric which measures translation accuracy in terms of precision and recall directly rather than precision and a brevity penaltybanerjee and lavie introduce the meteor metric which also incorporates recall on the unigram level and further provides facilities incorporating stemming and wordnet synonyms as a more flexible matchlin and hovy as well as soricut and brill present ways of extending the notion of ngram cooccurrence statistics over multiple references such as those used in bleu to other natural language generation tasks such as summarizationboth these approaches potentially suffer from the same weaknesses that bleu has in machine translation evaluationcoughlin performs a largescale investigation of bleus correlation with human judgments and finds one example that fails to correlateher future 
work section suggests that she has preliminary evidence that statistical machine translation systems receive a higher bleu score than their nonngrambased counterpartsin this paper we have shown theoretical and practical evidence that bleu may not correlate with human judgment to the degree that it is currently believed to dowe have shown that bleus rather coarse model of allowable variation in translation can mean that an improved bleu score is not sufficient to reflect a genuine improvement in translation qualitywe have further shown that it is not necessary to receive a higher bleu score in order to be judged to have better translation quality by human subjects as illustrated in the 2005 nist machine translation evaluation and our experiment manually evaluating systran and smt translationswhat conclusions can we draw from thisshould we give up on using bleu entirelywe think that the advantages of bleu are still very strong automatic evaluation metrics are inexpensive and do allow many tasks to be performed that would otherwise be impossiblethe important thing therefore is to recognize which uses of bleu are appropriate and which uses are notappropriate uses for bleu include tracking broad incremental changes to a single system comparing systems which employ similar translation strategies and using bleu as an objective function to optimize the values of parameters such as feature weights in log linear translation models until a better metric has been proposedinappropriate uses for bleu include comparing systems which employ radically different strategies trying to detect improvements for aspects of translation that are not modeled well by bleu and monitoring improvements that occur infrequently within a test corpusthese comments do not apply solely to bleumeteor precision and recall and other such automatic metrics may also be affected to a greater or lesser degree because they are all quite rough measures of translation similarity and have inexact models of allowable variation in translationfinally that the fact that bleus correlation with human judgments has been drawn into question may warrant a reexamination of past work which failed to show improvements in bleufor example work which failed to detect improvements in translation quality with the integration of word sense disambiguation or work which attempted to integrate syntactic information but which failed to improve bleu may deserve a second look with a more targeted manual evaluationthe authors are grateful to amittai axelrod frank keller beata kouchnir jean senellart and matthew stone for their feedback on drafts of this paper and to systran for providing translations of the europarl test set
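The modified n-gram precision, brevity penalty, and geometric-mean combination described above can be sketched compactly. This is a plain sentence-level BLEU sketch under my own naming, with the effective reference length taken as the closest reference length; the smoothing used in the sentence-level experiments is not implemented (a comment marks where it would go).

```python
from collections import Counter
from math import exp, log

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def modified_precision(hyp, refs, n):
    """Clipped n-gram precision: each hypothesis n-gram is credited at most
    as often as it occurs in any single reference translation."""
    hyp_counts = ngrams(hyp, n)
    max_ref = Counter()
    for ref in refs:
        for ng, c in ngrams(ref, n).items():
            max_ref[ng] = max(max_ref[ng], c)
    clipped = sum(min(c, max_ref[ng]) for ng, c in hyp_counts.items())
    return clipped, sum(hyp_counts.values())

def sentence_bleu(hyp, refs, max_n=4):
    """Unsmoothed sentence-level BLEU with equal n-gram weights."""
    precisions = []
    for n in range(1, max_n + 1):
        clipped, total = modified_precision(hyp, refs, n)
        if clipped == 0:   # sentence-level use would apply smoothing here
            return 0.0
        precisions.append(clipped / total)
    c = len(hyp)
    r = min((abs(len(ref) - c), len(ref)) for ref in refs)[1]  # closest ref length
    bp = 1.0 if c > r else exp(1.0 - r / c)
    return bp * exp(sum(log(p) for p in precisions) / max_n)
```

On the worked example above (15/18 unigram, 10/17 bigram, 5/16 trigram and 3/15 4-gram matches) the modified precisions come out to roughly 83%, 59%, 31% and 20%, matching the figures quoted in the text.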
E06-1032
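The displayed formula for the number of same-scored permutations is missing from the text above. The worked example (a length-18 hypothesis with 10 bigram matches yielding at least 40320 = 8! variants) suggests (k - b)! as the intended count, so that assumption is what this sketch computes.

```python
from math import factorial

def permutation_lower_bound(k: int, b: int) -> int:
    """Assumed count of same-scored reorderings of a hypothesis of length k
    with b bigram matches; (k - b)! reproduces the worked example."""
    return factorial(k - b)

print(permutation_lower_bound(18, 10))   # 40320, as quoted above
```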
Re-evaluating the role of BLEU in machine translation research. We argue that the machine translation community is overly reliant on the BLEU machine translation evaluation metric. We show that an improved BLEU score is neither necessary nor sufficient for achieving an actual improvement in translation quality, and give two significant counterexamples to BLEU's correlation with human judgments of quality. This offers new potential for research which was previously deemed unpromising by an inability to improve upon BLEU scores. The problems of BLEU include: synonyms and paraphrases are only handled if they are in the set of multiple reference translations available; the scores for words are equally weighted, so missing out on content-bearing material brings no additional penalty; and the brevity penalty is a stopgap measure to compensate for the fairly serious problem of not being able to calculate recall. BLEU has certain shortcomings for comparing different machine translation systems, especially if comparing conceptually different systems, e.g. phrase-based versus rule-based systems. We find that BLEU and NIST favour n-gram-based MT models such as Pharaoh, so the translations produced by rule-based systems score lower on the automatic evaluation, even though human judges consistently rate their output higher than the Pharaoh translations.
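The outlier analysis above reports squared Pearson correlations with and without the human-aided entry. A sketch of that comparison follows; the score arrays are invented placeholders, not the NIST 2005 numbers.

```python
from statistics import correlation   # Pearson's r, Python 3.10+

def r_squared(xs, ys):
    """Squared Pearson correlation between system-level scores."""
    return correlation(xs, ys) ** 2

# Invented placeholder scores; the last system plays the role of the
# human-aided outlier (low BLEU score, high average human score).
bleu_scores = [0.30, 0.28, 0.26, 0.24, 0.22, 0.21, 0.15]
human_scores = [3.2, 3.1, 2.9, 2.8, 2.7, 2.6, 3.4]

print(r_squared(bleu_scores, human_scores))              # outlier included
print(r_squared(bleu_scores[:-1], human_scores[:-1]))    # outlier excluded
```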
discriminative sentence compression with soft syntactic evidence we present a model for sentence compression that uses a discriminative largemargin learning framework coupled with a novel feature set defined on compressed bigrams as well as deep syntactic representations provided by auxiliary dependency and phrasestructure parsers the parsers are trained outofdomain and contain a significant amount of noise we argue that the discriminative nature of the learning algorithm allows the model to learn weights relative to any noise in the feature set to optimize compression accuracy directly this differs from current stateoftheart models that treat noisy parse trees for both compressed and uncompressed sentences as gold standard when calculating model parameters the ability to compress sentences grammatically with minimal information loss is an important problem in text summarizationmost summarization systems are evaluated on the amount of relevant information retained as well as their compression ratethus returning highly compressed yet informative sentences allows summarization systems to return larger sets of sentences and increase the overall amount of information extractedwe focus on the particular instantiation of sentence compression when the goal is to produce the compressed version solely by removing words or phrases from the original which is the most common setting in the literature in this framework the goal is to find the shortest substring of the original sentence that conveys the most important aspects of the meaningwe will work in a supervised learning setting and assume as input a training set t t1 of original sentences xt and their compressions ytwe use the ziffdavis corpus which is a set of 1087 pairs of sentencecompression pairsfurthermore we use the same 32 testing examples from knight and marcu and the rest for training except that we hold out 20 sentences for the purpose of developmenta handful of sentences occur twice but with different compressionswe randomly select a single compression for each unique sentence in order to create an unambiguous training setexamples from this data set are given in figure 1formally sentence compression aims to shorten a sentence x x1 xn into a substring y y1 ym where yi e x1 xnwe define the function i e 1 n that maps word yi in the compression to the index of the word in the original sentencefinally we include the constraint i i which forces each word in x to occur at most once in the compression y compressions are evaluated on three criteria typically grammaticality and importance are traded off with compression ratethe longer our compressions the less likely we are to remove important words or phrases crucial to maintaining grammaticality and the intended meaningthe paper is organized as follows section 2 discusses previous approaches to sentence compressionin particular we discuss the advantages and disadvantages of the models of knight and marcu in section 3 we present our discriminative largemargin model for sentence compression including the learning framework and an efficient decoding algorithm for searching the space of compressionswe also show how to extract a rich feature set that includes surfacelevel bigram features of the compressed sentence dropped words and phrases from the original sentence and features over noisy dependency and phrasestructure trees for the original sentencewe argue that this rich feature set allows the model to learn which words and phrases should be dropped and which should remain in the 
compressionsection 4 presents an experimental evaluation of our model compared to the models of knight and marcu and finally section 5 discusses some areas of future workknight and marcu first tackled this problem by presenting a generative noisychannel model and a discriminative treetotree decision tree modelthe noisychannel model defines the problem as finding the compressed sentence with maximum conditional probability p is the source model which is a pcfg plus bigram language modelp is the channel model the probability that the long sentence is an expansion of the compressed sentenceto calculate the channel model both the original and compressed versions of every sentence in the training set are assigned a phrasestructure treegiven a tree for a long sentence x and compressed sentence y the channel probability is the product of the probability for each transformation required if the tree for y is to expand to the tree for xthe treetotree decision tree model looks to rewrite the tree for x into a tree for ythe model uses a shiftreducedrop parsing algorithm that starts with the sequence of words in x and the corresponding treethe algorithm then either shifts reduces or drops on each step of the algorithma decision tree model is trained on a set of indicative features for each type of action in the parserthese models are then combined in a greedy global search algorithm to find a single compressionthough both models of knight and marcu perform quite well they do have their shortcomingsthe noisychannel model uses a source model that is trained on uncompressed sentences even though the source model is meant to represent the probability of compressed sentencesthe channel model requires aligned parse trees for both compressed and uncompressed sentences in the training set in order to calculate probability estimatesthese parses are provided from a parsing model trained on outofdomain data which can result in parse trees with many mistakes for both the original and compressed versionsthis makes alignment difficult and the channel probability estimates unreliable as a resulton the other hand the decision tree model does not rely on the trees to align and instead simply learns a treetotree transformation model to compress sentencesthe primary problem with this model is that most of the model features encode properties related to including or dropping constituents from the tree with no encoding of bigram or trigram surface features to promote grammaticalityas a result the model will sometimes return very short and ungrammatical compressionsboth models rely heavily on the output of a noisy parser to calculate probability estimates for the compressionwe argue in the next section that ideally parse trees should be treated solely as a source of evidence when making compression decisions to be balanced with other evidence such as that provided by the words themselvesrecently turner and charniak presented supervised and semisupervised versions of the knight and marcu noisychannel modelthe resulting systems typically return informative and grammatical sentences however they do so at the cost of compression rateriezler et al present a discriminative sentence compressor over the output of an lfg parser that is a packed representation of possible compressionsthough this model is highly likely to return grammatical compressions it required the training data be human annotated with syntactic treesfor the rest of the paper we use x x1 xn to indicate an uncompressed sentence and y y1 ym a compressed version of x 
ie each yj indicates the position in x of the jth word in the compressionwe always pad the sentence with dummy start and end words x1 start and xn end which are always included in the compressed version in this section we described a discriminative online learning approach to sentence compression the core of which is a decoding algorithm that searches the entire space of compressionslet the score of a compression y for a sentence x as in particular we are going to factor this score using a firstorder markov assumption on the words in the compressed sentence finally we define the score function to be the dot product between a high dimensional feature representation and a corresponding weight vector note that this factorization will allow us to define features over two adjacent words in the compression as well as the words inbetween that were dropped from the original sentence to create the compressionwe will show in section 32 how this factorization also allows us to include features on dropped phrases and subtrees from both a dependency and a phrasestructure parse of the original sentencenote that these features are meant to capture the same information in both the source and channel models of knight and marcu however here they are merely treated as evidence for the discriminative learner which will set the weight of each feature relative to the other features to optimize the models accuracy on the observed datawe define a dynamic programming table ci which represents the highest score for any compression that ends at word xi for sentence xwe define a recurrence as follows it is easy to show that cn represents the score of the best compression for sentence x under the firstorder score factorization we madewe can show this by inductionif we assume that cj is the highest scoring compression that ends at word xj for all j i then ci must also be the highest scoring compression ending at word xi since it represents the max combination over all high scoring shorter compressions plus the score of extending the compression to the current wordthus since xn is by definition in every compressed version of x then it must be the case that cn stores the score of the best compressionthis table can be filled in othis algorithm is really an extension of viterbi to the case when scores factor over dynamic substrings of the text as such we can use backpointers to reconstruct the highest scoring compression as well as kbest decoding algorithmsthis decoding algorithm is dynamic with respect to compression ratethat is the algorithm will return the highest scoring compression regardless of lengththis may seem problematic since longer compressions might contribute more to the score and thus be preferredhowever in section 32 we define a rich feature set including features on words dropped from the compression that will help disfavor compressions that drop very few words since this is rarely seen in the training datain fact it turns out that our learned compressions have a compression rate very similar to the gold standardthat said there are some instances when a static compression rate is preferreda user may specifically want a 25 compression rate for all sentencesthis is not a problem for our decoding algorithmwe simply augment the dynamic programming table and calculate cir which is the score of the best compression of length r that ends at word xithis table can be filled in as follows thus if we require a specific compression rate we simple determine the number of words r that satisfy this rate and calculate 
cnrthe new complexity is oso far we have defined the score of a compression as well as a decoding algorithm that searches the entire space of compressions to find the one with highest scorethis all relies on a score factorization over adjacent words in the compression s i w f iin section 33 we describe an online largemargin method for learning w here we present the feature representation f i for a pair of adjacent words in the compressionthese features were tuned on a development data setthe first set of features are over adjacent words yj1 and yj in the compressionthese include the partofspeech bigrams for the pair the pos of each word individually and the pos context of the most recent word being added to the compression yjthese features are meant to indicate likely words to include in the compression as well as some level of grammaticality eg the adjacent pos features jjvb would get a low weight since we rarely see an adjective followed by a verbwe also add a feature indicating if yj1 and yj were actually adjacent in the original sentence or not and we conjoin this feature with the above pos featuresnote that we have not included any lexical featureswe found during experiments on the development data that lexical information was too sparse and led to overfitting so we rarely include such featuresinstead we rely on the accuracy of pos tags to provide enough evidencenext we added features over every dropped word in the original sentence between yj1 and yj if there were anythese include the pos of each dropped word the pos of the dropped words conjoined with the pos of yj1 and yjif the dropped word is a verb we add a feature indicating the actual verb finally we add the pos context of each dropped wordthese features represent common characteristics of words that can or should be dropped from the original sentence in the compressed version we also add a feature indicating whether the dropped word is a negation we also have a set of features to represent brackets in the text which are common in the data setthe first measures if all the dropped words between yj1 and yj have a mismatched or inconsistent bracketingthe second measures if the left and rightmost dropped words are themselves both bracketsthese features come in handy for examples like the associated press reported the story where the compressed version is the associated press reported the storyinformation within brackets is often redundantthe previous set of features are meant to encode common pos contexts that are commonly retained or dropped from the original sentence during compressionhowever they do so without a larger picture of the function of each word in the sentencefor instance dropping verbs is not that uncommon a relative clause for instance may be dropped during compressionhowever dropping the main verb in the sentence is uncommon since that verb and its arguments typically encode most of the information being conveyedan obvious solution to this problem is to include features over a deep syntactic analysis of the sentenceto do this we parse every sentence twice once with a dependency parser and once with a phrasestructure parser these parsers have been trained outofdomain on the penn wsj treebank and as a result contain noisehowever we are merely going to use them as an additional source of featureswe call this soft syntactic evidence since the deep trees are not used as a strict goldstandard in our model but just as more evidence for or against particular compressionsthe learning algorithm will set the feature weight 
accordingly depending on each features discriminative powerit is not unique to use soft syntactic features in this way as it has been done for many problems in language processinghowever we stress this aspect of our model due to the history of compression systems using syntax to provide hard structural constraints on the outputlet us consider the sentence x mary saw ralph on tuesday after lunch with corresponding parses given in figure 2in particular let us consider the feature representation fthat is the feature representation of making ralph and after adjacent in the compression and dropping the prepositional phrase on tuesdaythe first set of features we consider are over dependency treesfor every dropped word we add a feature indicating the pos of the words parent in the treefor example if the dropped words parent is root then it typically means it is the main verb of the sentence and unlikely to be droppedwe also add a conjunction feature of the pos tag of the word being dropped and the pos of its parent as well as a feature indicating for each word being dropped whether it is a leaf node in the treewe also add the same features for the two adjacent words but indicating that they are part of the compressionfor the phrasestructure features we find every node in the tree that subsumes a piece of dropped text and is not a child of a similar nodein this case the pp governing on tuesdaywe then add features indicating the context from which this node was droppedfor example we add a feature specifying that a pp was dropped which was the child of a vpwe also add a feature indicating that a pp was dropped which was the left sibling of another pp etcideally for each production in the tree we would like to add a feature indicating every node that was dropped egvpvbd np pp pp vpvbd np pphowever we cannot necessarily calculate this feature since the extent of the production might be well beyond the local context of firstorder feature factorizationfurthermore since the training set is so small these features are likely to be observed very few timesin this section we have described a rich feature set over adjacent words in the compressed sentence dropped words and phrases from the original sentence and properties of deep syntactic trees of the original sentencenote that these features in many ways mimic the information already present in the noisychannel and decisiontree models of knight and marcu our bigram features encode properties that indicate both good and bad words to be adjacent in the compressed sentencethis is similar in purpose to the source model from the noisychannel systemhowever in that system the source model is trained on uncompressed sentences and thus is not as representative of likely bigram features for compressed sentences which is really what we desireour feature set also encodes dropped words and phrases through the properties of the words themselves and through properties of their syntactic relation to the rest of the sentence in a parse treethese features represent likely phrases to be dropped in the compression and are thus similar in nature to the channel model in the noisychannel system as well as the features in the treetotree decision tree systemhowever we use these syntactic constraints as soft evidence in our modelthat is they represent just another layer of evidence to be considered during training when setting parametersthus if the parses have too much noise the learning algorithm can lower the weight of the parse features since they are unlikely to be useful 
discriminators on the training datathis differs from the models of knight and marcu which treat the noisy parses as goldstandard when calculating probability estimatesan important distinction we should make is the notion of supported versus unsupported features supported features are those that are on for the gold standard compressions in the trainingfor instance the bigram feature nnvb will be supported since there is most likely a compression that contains a adjacent noun and verbhowever the feature jjvb will not be supported since an adjacent adjective and verb most likely will not be observed in any valid compressionour model includes all features including those that are unsupportedthe advantage of this is that the model can learn negative weights for features that are indicative of bad compressionsthis is not difficult to do since most features are pos based and the feature set size even with all these features is only 78923having defined a feature encoding and decoding algorithm the last step is to learn the feature weights w we do this using the margin infused relaxed algorithm which is a discriminative largemargin online learning technique shown in figure 3 on each iteration mira considers a single instance from the training set and updates the weights so that the score of the correct compression yt is greater than the score of all other compressions by a margin proportional to their lossmany weight vectors will satisfy these constraints so we pick the one with minimum change from the previous settingwe define the loss to be the number of words falsely retained or dropped in the incorrect compression relative to the correct onefor instance if the correct compression of the sentence in figure 2 is mary saw ralph then the compression mary saw after lunch would have a loss of 3 since it incorrectly left out one word and included two othersof course for a sentence there are exponentially many possible compressions which means that this optimization will have exponentially many constraintswe follow the method of mcdonald et al and create constraints only on the k compressions that currently have the highest score bestkthis can easily be calculated by extending the decoding algorithm with standard viterbi kbest techniqueson the development data we found that k 10 provided the best performance though varying k did not have a major impact overallfurthermore we found that after only 35 training epochs performance on the development data was maximizedthe final weight vector is the average of all weight vectors throughout trainingaveraging has been shown to reduce overfitting as well as reliance on the order of the examples during trainingwe found it to be particularly important for this data setwe use the same experimental methodology as knight and marcu we provide every compression to four judges and ask them to evaluate each one for grammaticality and importance on a scale from 1 to 5for each of the 32 sentences in our test set we ask the judges to evaluate three systems human annotated the decision tree model of knight and marcu and our systemthe judges were told all three compressions were automatically generated and the order in which they were presented was randomly chosen for each sentencewe compared our system to the decision tree model of knight and marcu instead of the noisychannel model since both performed nearly as well in their evaluation and the compression rate of the decision tree model is nearer to our system the noisychannel model typically returned longer 
compressionsresults are shown in table 1we present the average score over all judges as well as the standard deviationthe evaluation for the decision tree system of knight and marcu is strikingly similar to the original evaluation in their workthis provides strong evidence that the evaluation criteria in both cases were very similartable 1 shows that all models had similar compressions rates with humans preferring to compress a little more aggressivelynot surprisingly the human compressions are practically all grammaticala quick scan of the evaluations shows that the few ungrammatical human compressions were for sentences that were not really grammatical in the first placeof greater interest is that the compressions of our system are typically more grammatical than the decision tree model of knight and marcuwhen looking at importance we see that our system actually does the best even better than humansthe most likely reason for this is that our model returns longer sentences and is thus less likely to prune away important informationfor example consider the sentence the chemical etching process used for glare protection is effective and will help if your office has the fluorescentlight overkill that is typical in offices the human compression was glare protection is effective whereas our model compressed the sentence to the chemical etching process used for glare protection is effectivea primary reason that our model does better than the decision tree model of knight and marcu is that on a handful of sentences the decision tree compressions were a single word or nounphrasefor such sentences the evaluators typically rated the compression a 1 for both grammaticality and importancein contrast our model never failed in such drastic ways and always output something reasonablethis is quantified in the standard deviation of the two systemsthough these results are promising more large scale experiments are required to really ascertain the significance of the performance increaseideally we could sample multiple trainingtesting splits and use all sentences in the data set to evaluate the systemshowever since these systems require human evaluation we did not have the time or the resources to conduct these experimentshere we aim to give the reader a flavor of some common outputs from the different modelsthree examples are given in table 41the first shows two propertiesfirst of all the decision tree model completely breaks and just returns a single nounphraseour system performs well however it leaves out the complementizer of the relative clausethis actually occurred in a few examples and appears to be the most common problem of our modela postprocessing rule should eliminate thisthe second example displays a case in which our system and the human system are grammatical but the removal of a prepositional phrase hurts the resulting meaning of the sentencein fact without the knowledge that the sentence is referring to broadband the compressions are meaninglessthis appears to be a harder problem determining which prepositional phrases can be dropped and which cannotthe final and more interesting example presents two very different compressions by the human and our automatic systemhere the human kept the relative clause relating what languages the source code is available in but dropped the main verb phrase of the sentenceour model preferred to retain the main verb phrase and drop the relative clausethis is most likely due to the fact that dropping the main verb phrase of a sentence is much less likely in 
the training data than dropping a relative clausetwo out of four evaluators preferred the compression returned by our system and the other two rated them equalin this paper we have described a new system for sentence compressionthis system uses discriminative largemargin learning techniques coupled with a decoding algorithm that searches the space of all compressionsin addition we defined a rich feature set of bigrams in the compression and dropped words and phrases from the original sentencethe model also incorporates soft syntactic evidence in the form of features over dependency and phrasestructure trees for each sentencethis system has many advantages over previous approachesfirst of all its discriminative nature allows us to use a rich dependent feature set and to optimize a function directly related to compresthe fi rst new product atf protype is a line of digital postscript typefaces that will be sold in packages of up to six fontsatf protype is a line of digital postscript typefaces that will be sold in packages of up to six fonts the fi rst new product atf protype is a line of digital postscript typefaces will be sold in packages of up to six fonts finally another advantage of broadband is distanceanother advantage is distanceanother advantage of broadband is distanceanother advantage is distancethe source code which is available for c fortran ada and vhdl can be compiled and executed on the same system or ported to other target platforms the source code is available for c fortran ada and vhdl the source code is available for c the source code can be compiled and executed on the same system or ported to other target platforms sion accuracy during training both of which have been shown to be beneficial for other problemsfurthermore the system does not rely on the syntactic parses of the sentences to calculate probability estimatesinstead this information is incorporated as just another form of evidence to be considered during trainingthis is advantageous because these parses are trained on outofdomain data and often contain a significant amount of noisea fundamental flaw with all sentence compression systems is that model parameters are set with the assumption that there is a single correct answer for each sentenceof course like most compression and translation tasks this is not true consider tapeware which supports dos and netware 286 is a valueadded process that let us you directly connect the qa150exat to a file server and issue a command from any workstation to back up the server the human annotated compression is tapeware supports dos and netware 286however another completely valid compression might be tapeware let us you connect the qa150exat to a fi le serverthese two compressions overlap by a single wordour learning algorithm may unnecessarily lower the score of some perfectly valid compressions just because they were not the exact compression chosen by the human annotatora possible direction of research is to investigate multilabel learning techniques for structured data that learn a scoring function separating a set of valid answers from all invalid answersthus if a sentence has multiple valid compressions we can learn to score each valid one higher than all invalid compressions during training to avoid this problemthe author would like to thank daniel marcu for providing the data as well as the output of his and kevin knights systemsthanks also to hal daume and fernando pereira for useful discussionsfinally the author thanks the four reviewers for evaluating the 
compressed sentences. This work was supported by NSF ITR grants 0205448 and 0428193.
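The decoding algorithm described above fills a table C[i] holding the best score of any compression ending at word x_i, using the recurrence C[i] = max over j < i of C[j] + s(x, j, i) in O(n^2) time, with back-pointers to recover the compression. A minimal sketch (names are mine; score(j, i) stands in for the dot product w . f(x, j, i), and the fixed-compression-rate variant would simply add a length dimension C[i][r]):

```python
def decode(n, score):
    """Best compression of a sentence with words 1..n, where word 1 is the
    dummy start token and word n the dummy end token.  score(j, i) returns
    s(x, j, i): the score of making words j and i adjacent in the
    compression and dropping every word between them."""
    NEG_INF = float("-inf")
    C = [NEG_INF] * (n + 1)   # C[i]: best score of a compression ending at word i
    back = [0] * (n + 1)
    C[1] = 0.0
    for i in range(2, n + 1):
        for j in range(1, i):
            cand = C[j] + score(j, i)
            if cand > C[i]:
                C[i], back[i] = cand, j
    # follow back-pointers from the end token to recover the compression
    comp, i = [], n
    while i != 1:
        comp.append(i)
        i = back[i]
    comp.append(1)
    return C[n], comp[::-1]
```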
E06-1038
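The training loss above counts words falsely retained or dropped relative to the gold compression, and MIRA updates the weights so that the gold compression outscores the k highest-scoring incorrect ones by a margin proportional to that loss. The sketch below shows the loss and only a simplified single-constraint, MIRA-style update, not the full k-best quadratic program used in the paper; feature vectors are represented as sparse dicts and all names are illustrative.

```python
def compression_loss(gold, pred):
    """Number of words falsely retained plus falsely dropped, with
    compressions given as sets of original-sentence word positions.
    E.g. gold {1, 2, 3} vs. pred {1, 2, 5, 6} gives a loss of 3."""
    gold, pred = set(gold), set(pred)
    return len(pred - gold) + len(gold - pred)

def mira_style_update(w, f_gold, f_pred, loss):
    """Simplified single-constraint update: change w as little as possible
    so that the gold compression outscores the prediction by `loss`."""
    diff = {k: f_gold.get(k, 0.0) - f_pred.get(k, 0.0)
            for k in set(f_gold) | set(f_pred)}
    norm_sq = sum(v * v for v in diff.values())
    if norm_sq == 0.0:
        return dict(w)
    margin = sum(w.get(k, 0.0) * v for k, v in diff.items())
    tau = max(0.0, (loss - margin) / norm_sq)
    new_w = dict(w)
    for k, v in diff.items():
        new_w[k] = new_w.get(k, 0.0) + tau * v
    return new_w
```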
Discriminative sentence compression with soft syntactic evidence. We present a model for sentence compression that uses a discriminative large-margin learning framework coupled with a novel feature set defined on compressed bigrams as well as deep syntactic representations provided by auxiliary dependency and phrase-structure parsers. The parsers are trained out of domain and contain a significant amount of noise. We argue that the discriminative nature of the learning algorithm allows the model to learn weights relative to any noise in the feature set, to optimize compression accuracy directly. This differs from current state-of-the-art models that treat noisy parse trees for both compressed and uncompressed sentences as gold standard when calculating model parameters. We provide a Viterbi-like dynamic programming algorithm to recover the highest-scoring sequence of order-preserving bigrams from a lattice, either in unconstrained form or with a specific length constraint. We use the outputs of two parsers as features in a discriminative model that decomposes over pairs of consecutive words. We use a semi-Markov model which allows incorporating a language model for the compression.
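To make the feature factorization concrete, the sketch below generates a small subset of the feature templates described above for one adjacent pair in the compression (POS bigram, adjacency indicator, and dropped-word POS contexts). The template names and the exact string encoding are mine, not the paper's.

```python
def pair_features(sent_pos, j_prev, j):
    """A subset of the feature templates for one adjacent pair (j_prev, j)
    in the compression; sent_pos holds the POS tags of the original
    sentence and j_prev < j are indices into it."""
    p1, p2 = sent_pos[j_prev], sent_pos[j]
    adjacent = "adj" if j == j_prev + 1 else "nonadj"
    feats = [
        f"pos-bigram={p1}_{p2}",
        f"pos-bigram={p1}_{p2}&{adjacent}",   # conjoined with adjacency
        f"pos-left={p1}",
        f"pos-right={p2}",
    ]
    # templates over every word dropped between the pair
    for k in range(j_prev + 1, j):
        pd = sent_pos[k]
        feats.append(f"dropped-pos={pd}")
        feats.append(f"dropped-pos={pd}&context={p1}_{p2}")
    return feats
```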
comparing automatic and human evaluation of nlg systems we consider the evaluation problem in language generation and present results for evaluating several nlg systems with similar functionality including a knowledgebased generator and several statistical systems we compare evaluation results for these systems by human domain experts human nonexperts and several automatic evaluation metrics including nist bleu and rouge we find that nist scores correlate best with human judgments but that all automatic metrics we examined are biased in favour of generators that select on the basis of frequency alone we conclude that automatic evaluation of nlg systems has considerable potential in particular where highquality reference texts and only a small number of human evaluators are available however in general it is probably best for automatic evaluations to be supported by humanbased evaluations or at least by studies that demonstrate that a particular metric correlates well with human judgments in a given domain evaluation is becoming an increasingly important topic in natural language generation as in other fields of computational linguistics some nlg researchers are impressed by the success of the bleu evaluation metric in machine translation which has transformed the mt field by allowing researchers to quickly and cheaply evaluate the impact of new ideas algorithms and data sets bleu and related metrics work by comparing the output of an mt system to a set of reference translations and in principle this kind of evaluation could be done with nlg systems as well indeed nlg researchers are already starting to use bleu in their evaluations as this is much cheaper and easier to organise than the human evaluations that have traditionally been used to evaluate nlg systems however the use of such corpusbased evaluation metrics is only sensible if they are known to be correlated with the results of humanbased evaluations while studies have shown that ratings of mt systems by bleu and similar metrics correlate well with human judgments we are not aware of any studies that have shown that corpusbased evaluation metrics of nlg systems are correlated with human judgments correlation studies have been made of individual components but not of systems in this paper we present an empirical study of how well various corpusbased metrics agree with human judgments when evaluating several nlg systems that generate sentences which describe changes in the wind these systems do not perform content determination so our study does not address corpusbased evaluation of content determination nlg systems have traditionally been evaluated using human subjects nlg evaluations have tended to be of the intrinsic type involving subjects reading and rating texts usually subjects are shown both nlg and humanwritten texts and the nlg system is evaluated by comparing the ratings of its texts and human texts in some cases subjects are shown texts generated by several nlg systems including a baseline system which serves as another point of comparison this methodology was first used in nlg in the mid1990s by coch and lester and porter and continues to be popular today other extrinsic types of human evaluations of nlg systems include measuring the impact of different generated texts on task performance measuring how much experts postedit generated texts and measuring how quickly people read generated texts in recent years there has been growing interest in evaluating nlg texts by comparing them to a corpus of humanwritten texts as in other areas of nlp the advantages of automatic corpusbased evaluation are
that it is potentially much cheaper and quicker than humanbased evaluation and also that it is repeatablecorpusbased evaluation was first used in nlg by langkilde who parsed texts from a corpus fed the output of her parser to her nlg system and then compared the generated texts to the original corpus textssimilar evaluations have been used egby bangalore et al and marciniak and strube such corpusbased evaluations have sometimes been criticised in the nlg community for example by reiter and sripada grounds for criticism include the fact that regenerating a parsed text is not a realistic nlg task that texts can be very different from a corpus text but still effectively meet the systems communicative goal and that corpus texts are often not of high enough quality to form a realistic testthe mt and document summarisation communities have developed evaluation metrics based on comparing output texts to a corpus of human texts and have shown that some of these metrics are highly correlated with human judgmentsthe bleu metric in mt has been particularly successful for example mt05 the 2005 nist mt evaluation exercise used bleu4 as the only method of evaluationbleu is a precision metric that assesses the quality of a translation in terms of the proportion of its word ngrams that it shares with one or more highquality reference translationsbleu scores range from 0 to 1 1 being the highest which can only be achieved by a translation if all its substrings can be found in one of the reference texts bleu should be calculated on a large test set with several reference translations properly calculated bleu scores have been shown to correlate reliably with human judgments the nist mt evaluation metric is an adaptation of bleu but where bleu gives equal weight to all ngrams nist gives more importance to less frequent ngramsbleus ability to detect subtle but important differences in translation quality has been questioned some research showing nist to be more sensitive the rouge metric was conceived as document summarisations answer to bleu but it does not appear to have met with the same degree of enthusiasmthere are several different rouge metricsthe simplest is rougen which computes the highest proportion in any reference summary of ngrams that are matched by the systemgenerated summarya procedure is applied that averages the score across leaveoneout subsets of the set of reference textsrougen is an almost straightforward ngram recall metric between two texts and has several counterintuitive properties including that even a text composed entirely of sentences from reference texts cannot score 1 there are several other variants of the rouge metric and rouge2 along with rougesu were among the official scores for the duc 2005 summarisation taskthe sumtime project developed an nlg system which generated textual weather forecasts from numerical forecast datathe sumtime system generates specialist forecasts for offshore oil rigsit has two modules a contentdetermination module that determines the content of the weather forecast by analysing the numerical data using linear segmentation and other data analysis techniques and a microplanning and realisation module which generates texts based on this content by choosing appropriate words deciding on aggregation enforcing the sublanguage grammar and so forthsumtime generates very highquality texts in some cases forecast users believe sumtime texts are better than humanwritten texts sumtime is a knowledgebased nlg systemwhile its design was informed by corpus analysis 
the system is based on manually authored rules and codeas part of the project the sumtime team created a corpus of 1045 forecasts from the commercial output of five different forecasters and the input data that the forecasters examined when they wrote the forecasts in other words the sumtime corpus contains both the inputs and the outputs of the forecastgeneration processthe sumtime team also derived a content representation from the corpus texts similar to that produced by sumtimes contentdetermination modulethe sumtime microplannerrealiser can be driven by these tuples this mode is called sumtimehybridtable 1 includes an example of the tuples extracted from the corpus text and a sumtimehybrid text produced from the tuples statistical nlg has focused on generateandselect models a set of alternatives is generated and one is selected with a language modelthis technique is computationally very expensivemoreover the only type of language model used in nlg are ngram models which have the additional disadvantage of a general preference for shorter realisations which can be harmful in nlg pcru1 language generation is a language generation framework that was designed to facilitate statistical generation techniques that are more efficient and less biasedin pcru generation a base generator is encoded as a set of generation rules made up of relations with zero or more atomic argumentsthe base generator is then trained on raw text corpora to provide a probability distribution over generation rulesthe resulting pcru generator can be run in several modes including the following random ignoring pcru probabilities randomly select generation rulesngram ignoring pcru probabilities generate set of alternatives and select the most likely according to a given ngram language modelgreedy select the most likely among each set of candidate generation rulesgreedy roulette select rules with likelihood proportional to their pcru probabilitythe greedy modes are deterministic and therefore considerably cheaper in computational terms than the equivalent ngram method the main goal of our experiments was to determine how well a variety of automatic evaluation metrics correlated with human judgments of text quality in nlga secondary goal was to determine if there were types of nlg systems for which the correlation of automatic and human evaluation was particularly good or baddata we extracted from each forecast in the sumtime corpus the first description of wind from every morning forecast which resulted in a set of about 500 wind forecastswe excluded several forecasts for which we had no input data or an incomplete set of system outputs this left 465 texts which we used in our evaluationthe inputs to the generators were tuples composed of an index timestamp wind direction wind speed range and gust speed range we randomly selected a subset of 21 forecast dates for use in human evaluationsfor these 21 forecast dates we also asked two meteorologists who had not contributed to the original sumtime corpus to write new forecasts texts we used these as reference texts for the automatic metricsthe forecasters created these texts by rewriting the corpus texts as this was a more natural task for them than writing texts based on tuples500 wind descriptions may seem like a small corpus but in fact provides very good coverage as the domain language is extremely simple involving only about 90 word forms and a small handful of different syntactic structuressystems and texts evaluated we evaluated four pcru generators and the sumtime 
system operating in hybrid mode for better comparability because the pcru generators do not perform content determinationa base pcru generator was created semiautomatically by running a chunker over the corpus extracting generation rules and adding some higherlevel rules taking care of aggregation elision etcthis base generator was then trained on 910 of the corpus 5 different random divisions of the corpus into training and testing data were used additionally a backoff 2gram model with goodturing discounting and no lexical classes was built from the same training data using the srilm toolkit forecasts were then generated for all corpus inputs in all four generation modes table 1 shows an example of an input to the systems along with the three human texts and the texts produced by all five nlg systems from this dataautomatic evaluations we used nist2 bleu3 and rouge4 to automatically evaluate the above systems and textswe computed bleun for n 14 we also computed nist5 and rouge4as a baseline we used stringedit distance with substitution at cost 2 and deletion and insertion at cost 1 and normalised to range 0 to 1 when multiple reference texts are used the se score for a generator forecast is the average of its scores against the reference texts the se score for a set of generator forecasts is the average of scores for individual forecastshuman evaluations we recruited 9 experts and 21 nonexperts subjects did not have a background in nlp and were native speakers of englishthey were shown forecast texts from all the generators and from the corpus and asked to score them on a scale of 0 to 5 for readability clarity and general appropriatenessexperts were additionally shown the numerical weather data that the forecast text was based onat the start subjects were shown two practice examplesthe experiments were carried out over the websubjects completed the experiment unsupervised at a time and place of their choosingexpert subjects were shown a randomly selected forecast for 18 of the datesthe nonexperts were shown 21 forecast texts in a repeated latin squares experimental design where each combination of date and system is assigned one evaluationtable 2 shows evaluation scores for the five nlg systems and the corpus texts as assessed by experts nonexperts nist5 bleu4 rouge4 and sescores are averaged over the 18 forecasts that were used in the expert experiments in order to make results as directly comparable as possiblehuman scores are normalised to range 0 to 1systems are ranked in order of the scores given to them by expertsall ranks are shown in brackets behind the absolute scoresboth experts and nonexperts score sumtimehybrid the highest and pcru2gram and pcrurandom the lowestthe experts have pcrugreedy in second place where the nonexperts have pcruroulettethe experts rank the corpus forecasts fourth the nonexperts secondwe used approximate randomisation as our significance test as recommended by riezler and maxwell iii pairwise tests between results in table 2 showed all but three differences to be significant with the likelihood of incorrectly rejecting the null hypothesis p 005 the exceptions were the differences in nist and se scores for sumtimehybridpcruroulette and the difference in bleu scores for sumtimehybridpcru2gramtable 3 shows pearson correlation coefficients for the metrics and humans in table 2the strongest correlation with experts and nonexperts is achieved by nist5 with rouge4 and se showing especially poor correlationbleu4 correlates fairly well with the nonexperts but 
less with the expertswe computed another correlation statistic which measures how well scores by an arbitrary single human or run of a metric correlate with the average scores by a set of humans or runs of a metricthis is computed as the average pcc between the scores assigned by individual humansruns of a metric and the average scores assigned by a set of humansruns of a metric for example the pcc for nonexperts and experts is 0845 but the average pcc between individual nonexperts and average expert judgment is only 0496 implying that an arbitrary nonexpert is not very likely to correlate well with average expert judgmentsexperts are better predictors for each others judgments than nonexperts interestingly it turns out that an arbitrary nist5 run is a better predictor of average expert opinion than an arbitrary single expert the number of forecasts we were able to use in our human experiments was small and to back up the results presented in table 2 we report nist5 bleu4 rouge4 and se scores averaged across the five test sets from the pcru validation runs in table 4the picture is similar to results for the smaller data set the rankings assigned by all metrics are the same except that nist5 and se have swapped the ranks of sumtimehybrid and pcruroulettepairwise ar tests showed all differences to be significant with p 005 except for the differences in bleu nist and rouge scores for sumtimehybridpcruroulette and the difference in bleu scores for sumtimehybridpcru2gramin both tables 2 and 4 there are two major differences between the rankings assigned by human and automatic evaluation human evaluators prefer sumtimehybrid over pcrugreedy whereas all the automatic metrics have it the other way around and human evaluators score pcruroulette highly whereas the automatic metrics score it very low second worst to random generation there are two clear tendencies in scores going from left to right across tables 2 and 4 sumtimehybrid goes down in rank and pcru2gram comes upin addition to the bleu4 scores shown in the tables we also calculated bleu1 bleu2 bleu3 scoresthese give similar results except that bleu1 and bleu2 rank pcruroulette as highly as the human judgesit is striking how low the experts rank the corpus texts and to what extent they disagree on their qualitythis appears to indicate that corpus quality is not idealif an imperfect corpus is used as the gold standard for the automatic metrics then high correlation with human judgments is less likely and this may explain the difference in human and automatic scores for sumtimehybridif we assume that the human evaluation scores are the most valid then the automatic metrics do not do a good job of comparing the knowledgebased sumtime system to the statistical systemsone reason for this could be that there are cases where sumtime deliberately does not choose the most common option in the corpus because its developers believed that it was not the best for readersfor example in table 1 the human forecasters and pcrugreedy use the phrase by late evening to refer to 0000 pcru2gram uses the phrase later while sumtimehybrid uses the phrase by midnightthe pcru choices reflect frequency in the sumtime corpus later and by late evening are more common than by midnight however forecast readers dislike this use of later and also dislike variants of by evening because they are unsure how to interpret them this is why sumtime uses by midnightthe sumtime system builders believe deviating from corpus frequency in such cases makes sumtime texts better from the 
readers perspective and it does appear to increase human ratings of the system but deviating from the corpus in such a way decreases the systems score under corpussimilarity metricsin other words judging the output of an nlg system by comparing it to corpus texts by a method that rewards corpus similarity will penalise systems which do not base choice on highest frequency of occurrence in the corpus even if this is motivated by careful studies of what is best for text readersthe mt community recognises that bleu is not effective at evaluating texts which are as good as the reference textsthis is not a problem for mt because the output of current mt systems is generally worse than human translationsbut it is an issue for nlg where systems are domainspecific and can generate texts that are judged better by humans than humanwritten texts although the automatic evaluation metrics generally replicated human judgments fairly well when comparing different statistical nlg systems there was a discrepancy in the ranking of pcruroulette pcruroulette differs from the other statistical generators because it does not always try to make the most common choice instead it tries to vary choicesin particular if there are several competing words and phrases with similar probabilities pcruroulette will tend to use different words and phrases in different texts whereas the other statistical generators will stick to those with the highest frequencythis behaviour is penalised by the automatic evaluation metrics but the human evaluators do not seem to mind itone of the classic rules of writing is to vary lexical and syntactic choices in order to keep text interestinghowever this behaviour will always reduce a systems score under corpussimilarity metrics even if it enhances text quality from the perspective of readersfoster and oberlander in their study of facial gestures have also noted that humans do not mind and indeed in some cases prefer variation whereas corpusbased evaluations give higher ratings to systems which follow corpus frequencyusing more reference texts does counteract this tendency but only up to a point no matter how many reference texts are used there will still be one or a small number of most frequent variants and using anything else will still worsen corpussimilarity scorescanvassing expert opinion of text quality and averaging the results is also in a sense frequencybased as results reflect what the majority of experts consider good variantsexpert opinions can vary considerably as shown by the low correlation among experts in our study and evaluations by a small number of experts may also be problematic unless we have good reason to believe that expert opinions are highly correlated in the domain ultimately such disagreement between experts suggests that judgments of the text quality whether by human or metric really should be be backed up by judgments of the effectiveness of a text in helping real users perform tasks or otherwise achieving its communicative goalwe plan to further investigate the performance of automatic evaluation measures in nlg in the future performing similar experiments to the one described here in other domains and with more subjects and larger test sets investigating whether automatic corpusbased techniques can evaluate content determination investigating how well both human ratings and corpusbased measures correlate with extrinsic evaluations of the effectiveness of generated textsultimately we would like to move beyond critiques of existing corpusbased metrics to 
proposing new metrics which work well for nlgcorpus quality plays a significant role in automatic evaluation of nlg textsautomatic metrics can be expected to correlate very highly with human judgments only if the reference texts used are of high quality or rather can be expected to be judged high quality by the human evaluatorsthis is especially important when the generated texts are of similar quality to humanwritten textsin mt highquality texts vary less than generally in nlg so bleu scores against 4 reference translations from reputable sources are a feasible evaluation regimeit seems likely that for automatic evaluation in nlg a larger number of reference texts than four are neededin our experiments we have found nist a more reliable evaluation metric than bleu and in particular rouge which did not seem to offer any advantage over simple stringedit distancewe also found individual experts judgments are not likely to correlate highly with average expert opinion in fact less likely than nist scoresthis seems to imply that if expert evaluation can only be done with one or two experts but a highquality reference corpus is available then a nistbased evaluation may produce more accurate results than an expertbased evaluationit seems clear that for automatic corpusbased evaluation to work well we need highquality reference texts written by many different authors and large enough to give reasonable coverage of phenomena such as variation for variations sakemetrics that do not exclusively reward similarity with reference texts are more likely to correlate well with human judges but all of the existing metrics that we looked at still penalised generators that do not always choose the most frequent variantthe results we have reported here are for a relatively simple sublanguage and domain and more empirical research needs to be done on how well different evaluation metrics and methodologies correlate with each otherin order to establish reliable and trusted automatic crosssystem evaluation methodologies it seems likely that the nlg community will need to establish how to collect large amounts of highquality reference texts and develop new evaluation metrics specifically for nlg that correlate more reliably with human judgments of text quality and appropriatenessultimately research should also look at developing new evaluation techniques that correlate reliably with the real world usefulness of generated textsin the shorter term we recommend that automatic evaluations of nlg systems be supported by conventional largescale humanbased evaluationsanja belzs part of the research reported in this paper was supported under uk epsrc grant grs2448001many thanks to john carroll roger evans and the anonymous reviewers for very helpful comments
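The string-edit distance baseline used in the experiments above (substitution at cost 2, insertion and deletion at cost 1, normalised to the 0 to 1 range) can be sketched as follows. The token-level comparison and the exact normalisation, dividing by the maximum possible cost of deleting one sequence and inserting the other, are assumptions, since the text does not spell them out.

```python
def edit_distance(a, b, sub=2, indel=1):
    # standard dynamic program over token sequences a and b
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i * indel
    for j in range(n + 1):
        d[0][j] = j * indel
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + indel,       # delete a[i-1]
                          d[i][j - 1] + indel,       # insert b[j-1]
                          d[i - 1][j - 1] + cost)    # match or substitute
    return d[m][n]

def se_score(generated, reference):
    """Similarity in [0, 1]; 1 means identical token sequences."""
    gen, ref = generated.split(), reference.split()
    max_cost = len(gen) + len(ref)                   # delete all, insert all
    return 1.0 if max_cost == 0 else 1.0 - edit_distance(gen, ref) / max_cost
```

When several reference texts are available, the paper averages the score of a generated forecast over its scores against each reference, and averages again over the forecast set.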
E06-1040
comparing automatic and human evaluation of nlg systems we consider the evaluation problem in natural language generation and present results for evaluating several nlg systems with similar functionality including a knowledgebased generator and several statistical systems we compare evaluation results for these systems by human domain experts human nonexperts and several automatic evaluation metrics including nist bleu and rouge we find that nist scores correlate best with human judgments but that all automatic metrics we examined are biased in favour of generators that select on the basis of frequency alone we conclude that automatic evaluation of nlg systems has considerable potential in particular where highquality reference texts and only a small number of human evaluators are available however in general it is probably best for automatic evaluations to be supported by humanbased evaluations or at least by studies that demonstrate that a particular metric correlates well with human judgments in a given domain we use several different evaluation techniques to evaluate the output of five nlg systems which generated wind descriptions for weather forecasts we demonstrate that automatic metrics can correlate highly with human ratings if the training dataset is of high quality the two automatic metrics used in the evaluations nist and bleu have been shown to correlate highly with expert judgments in this domain
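The agreement figures reported in the study are Pearson correlation coefficients between metric scores and human ratings over the same set of forecasts or systems. A minimal self-contained sketch is given below; how the score pairs are assembled is up to the caller, and the usage numbers in the comment are hypothetical.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists of
    scores; assumes neither list is constant."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# hypothetical usage: per-system metric scores vs. average expert ratings
# pearson([6.1, 5.2, 4.8, 3.9, 1.0], [0.81, 0.77, 0.70, 0.59, 0.26])
```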
a clustering approach for nearly unsupervised recognition of nonliteral language in this paper we present trofi a system for automatically classifying literal and nonliteral usages of verbs through nearly unsupervised wordsense disambiguation and clustering techniques trofi uses sentential context instead of selectional constraint violations or paths in semantic hierarchies it also uses literal and nonliteral seed sets acquired and cleaned without human supervision in order to bootstrap learning we adapt a wordsense disambiguation algorithm to our task and augment it with multiple seed set learners a voting schema and additional features like supertags and extrasentential context detailed experiments on handannotated data show that our enhanced algorithm outperforms the baseline by 244 using the trofi algorithm we also build the trofi example base an extensible resource of annotated literalnonliteral examples which is freely available to the nlp research community in this paper we propose trofi a nearly unsupervised clustering method for separating literal and nonliteral usages of verbsfor example given the target verb pour we would expect trofi to cluster the sentence custom demands that cognac be poured from a freshly opened bottle as literal and the sentence salsa and rap music pour out of the windows as nonliteral which indeed it doeswe call our method nearly unsupervisedsee section 31 for why we use this terminologywe reduce the problem of nonliteral language recognition to one of wordsense disambiguation by redefining literal and nonliteral as two different senses of the same word and we adapt an existing similaritybased wordsense disambiguation method to the task of separating usages of verbs into literal and nonliteral clustersthis paper focuses on the algorithmic enhancements necessary to facilitate this transformation from wordsense disambiguation to nonliteral language recognitionthe output of trofi is an expandable example base of literalnonliteral clusters which is freely available to the research communitymany systems that use nlp methods such as dialogue systems paraphrasing and summarization language generation information extraction machine translation etc would benefit from being able to recognize nonliteral languageconsider an example based on a similar example from an automated medical claims processing systemwe must determine that the sentence she hit the ceiling is meant literally before it can be marked up as an accident claimnote that the typical use of hit the ceiling stored in a list of idioms cannot help usonly using the context she broke her thumb while she was cheering for the patriots and in her excitement she hit the ceiling can we decidewe further motivate the usefulness of the ability to recognize literal vs nonliteral usages using an example from the recognizing textual entailment challenge of 2005in the challenge data pair 1959 was kerry hit bush hard on his conduct on the war in iraq kerry shot bushthe objective was to report false since the second statement in this case is not entailed from the first onein order to do this it is crucial to know that hit is being used nonliterally in the first sentenceideally we would like to look at trofi as a first step towards an unsupervised scalable widely applicable approach to nonliteral language processing that works on realworld data from any domain in any languagethe foundations of trofi lie in a rich collection of metaphor and metonymy processing systems everything from handcoded rulebased systems to 
statistical systems trained on large corporarulebased systems some using a type of interlingua others using complicated networks and hierarchies often referred to as metaphor maps must be largely handcoded and generally work well on an enumerable set of metaphors or in limited domainsdictionarybased systems use existing machinereadable dictionaries and path lengths between words as one of their primary sources for metaphor processing information corpusbased systems primarily extract or learn the necessary metaphorprocessing information from large corpora thus avoiding the need for manual annotation or metaphormap constructionexamples of such systems can be found in the work on supervised metonymy resolution by nissim markert and the work on conceptual metaphors by mason come closest to what we are trying to do with trofinissim markert approach metonymy resolution with machine learning methods which exploit the similarity between examples of conventional metonymy p 56they see metonymy resolution as a classification problem between the literal use of a word and a number of predefined metonymy typesthey use similarities between possibly metonymic words and known metonymies as well as context similarities to classify the pmwsthe main difference between the nissim markert algorithm and the trofi algorithm besides the fact that nissim markert deal with specific types of metonymy and not a generalized category of nonliteral language is that nissim markert use a supervised machine learning algorithm as opposed to the primarily unsupervised algorithm used by trofimason presents cormet a corpusbased system for discovering metaphorical mappings between concepts p 23his system finds the selectional restrictions of given verbs in particular domains by statistical meansit then finds metaphorical mappings between domains based on these selectional preferencesby finding semantic differences between the selectional preferences it can articulate the higherorder structure of conceptual metaphors p 24 finding mappings like liquidmoneylike cormet trofi uses contextual evidence taken from a large corpus and also uses wordnet as a primary knowledge source but unlike cormet trofi does not use selectional preferencesmetaphor processing has even been approached with connectionist systems storing worldknowledge as probabilistic dependencies trofi is not a metaphor processing systemit does not claim to interpret metonymy and it will not tell you what a given idiom meansrather trofi attempts to separate literal usages of verbs from nonliteral onesfor the purposes of this paper we will take the simplified view that literal is anything that falls within accepted selectional restrictions or our knowledge of the world nonliteral is then anything that is not literal including most tropes such as metaphors idioms as well phrasal verbs and other anomalous expressions that cannot really be seen as literalin terms of metonymy trofi may cluster a verb used in a metonymic expression such as i read keats as nonliteral but we make no strong claims about thisthe trofi algorithm requires a target set the set of sentences containing the verbs to be classified into literal or nonliteral and the seed sets the literal feedback set and the nonliteral feedback setthese sets contain feature lists consisting of the stemmed nouns and verbs in a sentence with target or seed words and frequent words removedthe frequent word list consists of the 332 most frequent words in the british national corpus plus contractions single letters and numbers 
from 010the target set is built using the 8889 wall street journal corpus tagged using the tagger and the supertagger the feedback sets are built using wsj sentences conalgorithm 1 ketrain algorithm adapted to literalnonliteral classification taining seed words extracted from wordnet and the databases of known metaphors idioms and expressions namely wayne magnuson english idioms sayings slang and george lakoffs conceptual metaphor list as well as example sentences from these sourcesone may ask why we need trofi if we have databases like the dokmiethe reason is that the dokmie are unlikely to list all possible instances of nonliteral language and because knowing that an expression can be used nonliterally does not mean that you can tell when it is being used nonliterallythe target verbs may not and typically do not appear in the feedback setsin addition the feedback sets are noisy and not annotated by any human which is why we call trofi unsupervisedwhen we use wordnet as a source of example sentences or of seed words for pulling sentences out of the wsj for building the literal feedback set we cannot tell if the wordnet synsets or the collected feature sets are actually literalwe provide some automatic methods in section 33 to ensure that the feedback set feature sets that will harm us in the clustering phase are removedas a sideeffect we may fill out sparse nonliteral setsin the next section we look at the core trofi algorithm and its use of the above data sourcessince we are attempting to reduce the problem of literalnonliteral recognition to one of wordsense disambiguation trofi makes use of an existing similaritybased wordsense disambiguation algorithm developed by henceforth kethe ke algorithm is based on the principle of attraction similarities are calculated between sentences containing the word we wish to disambiguate and collections of seed sentences a target set sentence is considered to be attracted to the feedback set containing the sentence to which it shows the highest similaritytwo sentences are similar if they contain similar words and two words are similar if they are contained in similar sentencesthe resulting transitive similarity allows us to defeat the knowledge acquisition bottleneck ie the low likelihood of finding all possible usages of a word in a single corpusnote that the ke algorithm concentrates on similarities in the way sentences use the target literal or nonliteral word not on similarities in the meanings of the sentences themselvesalgorithms 1 and 2 summarize the basic trofi version of the ke algorithmnote that p is the unigram probability of word w in sentence s normalized by the total number of words in s in practice initializing ssimi0 in line of algorithm 1 to 0 and then updating it from wsimo means that each target sentence is still maximally similar to itself but we also discover additional similarities between target sentenceswe further enhance the algorithm by using sum of similaritiesto implement this in algorithm 2 we change line into esy ssiml esy ssimn although it is appropriate for finegrained tasks like wordsense disambiguation to use the single highest similarity score in order to minimize noise summing across all the similarities of a target set sentence to the feedback set sentences is more appropriate for literalnonliteral clustering where the usages could be spread across numerous sentences in the feedback setswe make another modification to algorithm 2 by checking that the maximum sentence similarity in line is above a certain threshold 
for classificationif the similarity is above this threshold we label a targetword sentence as literal or nonliteralbefore continuing let us look at an examplethe features are shown in boldn2 this idea is risky but it looks like the director of the institute has comprehended the basic principles behind itn3 mrs fipps is having trouble comprehending the legal straits of the instituten4 she had a hand in his fully comprehending the quandarythe target set consists of sentences from the corpus containing the target wordthe feedback sets contain sentences from the corpus containing synonyms of the target word found in wordnet and the dokmie the feedback sets also contain example sentences provided in the targetword entries of these datasetstrofi attempts to cluster the target set sentences into literal and nonliteral by attracting them to the corresponding feature sets using algorithms 1 2using the basic ke algorithm target sentence 2 is correctly attracted to the nonliteral set and sentences 1 and 3 are equally attracted to both setswhen we apply our sum of similarities enhancement sentence 1 is correctly attracted to the literal set but sentence 3 is now incorrectly attracted to the literal set tooin the following sections we describe some enhancements learners voting supertags and context that try to solve the problem of incorrect attractionsin this section we describe how we clean up the feedback sets to improve the performance of the core algorithmwe also introduce the notion of learners votingrecall that neither the raw data nor the collected feedback sets are manually annotated for training purposessince in addition the feedback sets are collected automatically they are very noisyfor instance in the example in section 32 the literal feedback set sentence l3 contains an idiom which was provided as an example sentence in wordnet as a synonym for graspin n4 we have the sideeffect feature hand which unfortunately overlaps with the feature hand that we might hope to find in the literal set in order to remove sources of false attraction like these we introduce the notion of scrubbingscrubbing is founded on a few basic principlesthe first is that the contents of the dokmie come from human annotations and are thus trustedconsequently we take them as primary and use them to scrub the wordnet synsetsthe second is that phrasal and expression verbs for example throw away are often indicative of nonliteral uses of verbs ie they are not the sum of their parts so they can be used for scrubbingthe third is that content words appearing in both feedback sets for example the wind is blowing vs the winds of war are blowing for the target word blow will lead to impure feedback sets a situation we want to avoidthe fourth is that our scrubbing action can take a number of different forms we can choose to scrub just a word a whole synset or even an entire feature setin addition we can either move the offending item to the opposite feedback set or remove it altogethermoving synsets or feature sets can add valuable content to one feedback set while removing noise from the otherhowever it can also because unforeseen contaminationwe experimented with a number of these options to produce a whole complement of feedback set learners for classifying the target sentencesideally this will allow the different learners to correct each otherfor learner a we use phrasalexpression verbs and overlap as indicators to select whole wordnet synsets for moving over to the nonliteral feedback setin our example this causes l1l3 to be 
moved to the nonliteral setfor learner b we use phrasalexpression verbs and overlap as indicators to remove problematic synsetsthus we avoid accidentally contaminating the nonliteral sethowever we do end up throwing away information that could have been used to pad out sparse nonliteral setsin our example this causes l1l3 to be droppedfor learner c we remove feature sets from the final literal and nonliteral feedback sets based on overlapping wordsin our example this causes l2 and n4 to be droppedlearner d is the baseline no scrubbingwe simply use the basic algorithmeach learner has benefits and shortcomingsin order to maximize the former and minimize the latter instead of choosing the single most successful learner we introduce a voting systemwe use a simple majorityrules algorithm with the strongest learners weighted more heavilyin our experiments we double the weights of learners a and d in our example this results in sentence 3 now being correctly attracted to the nonliteral seteven before voting we attempt to improve the correctness of initial attractions through the use of supertags which allows us to add internal structure information to the bagofwords feature listssupertags encode a great deal of syntactic information in a single tag in addition to a words part of speech they also encode information about its location in a syntactic tree ie we learn something about the surrounding words as wellwe devised a supertag trigram composed of the supertag of the target word and the following two words and their supertags if they contain nouns prepositions particles or adverbsthis is helpful in cases where the same set of features can be used as part of both literal and nonliteral expressionsfor example turning it is hard to kick a habit like drinking into habit drink kickb nx0vpls1 habita nxn results in a higher attraction to sentences about kicking habits than to sentences like she has a habit of kicking me when she is been drinking note that the creation of learners a and b changes if supertags are usedin the original version we only move or remove synsets based on phrasalexpression verbs and overlapping wordsif supertags are used we also move or remove feature sets whose supertag trigram indicates phrasal verbs a final enhancement involves extending the context to help with disambiguationsometimes critical disambiguation features are contained not in the sentence with the target word but in an adjacent sentenceto add context we simply group the sentence containing the target word with a specified number of surrounding sentences and turn the whole group into a single feature settrofi was evaluated on the 25 target words listed in table 1the target sets contain from 1 to 115 manually annotated sentences for each verbthe first round of annotations was done by the first annotatorthe second annotator was given no instructions besides a few examples of literal and nonliteral usage the authors of this paper were the annotatorsour interannotator agreement on the annotations used as test data in the experiments in this paper is quite high n and n on a random sample of 200 annotated examples annotated by two different annotators was found to be 077as per cf refs therein the standard assessment for n values is that tentative conclusions on agreement exists when 67 n 8 and a definite conclusion on agreement exists when n 8in the case of a larger scale annotation effort having the person leading the effort provide one or two examples of literal and nonliteral usages for each target verb to each 
annotator would almost certainly improve interannotator agreementtable 1 lists the total number of target sentences plus the manually evaluated literal and nonliteral counts for each target wordit also provides the feedback set sizes for each target wordthe totals across all words are given at the bottom of the tablethe algorithms were evaluated based on how accurately they clustered the handannotated sentencessentences that were attracted to neither cluster or were equally attracted to both were put in the opposite set from their label making a failure to cluster a sentence an incorrect clusteringevaluation results were recorded as recall precision and fscore valuesliteral recall is defined as literal precision is defined as if there are no literals literal recall is 100 literal precision is 100 if there are no nonliterals in the literal cluster and 0 otherwisethe fscore is defined as nonliteral precision and recall are defined similarlyaverage precision is the average of literal and nonliteral precision similarly for average recallfor overall performance we take the fscore of average precision and average recallwe calculated two baselines for each wordthe first was a simple majorityrules baselinedue to the imbalance of literal and nonliteral examples this baseline ranges from 609 to 667 with an average of 636keep in mind though that using this baseline the fscore for the nonliteral set will always be 0we come back to this point at the end of this sectionwe calculated a second baseline using a simple attraction algorithmeach target set sentence is attracted to the feedback set containing the sentence with which it has the most words in commonthis corresponds well to the basic highest similarity trofi algorithmsentences attracted to neither or equally to both sets are put in the opposite cluster to where they belongsince this baseline actually attempts to distinguish between literal and nonliteral and uses all the data used by the trofi algorithm it is the one we will refer to in our discussion belowexperiments were conducted to first find the results of the core algorithm and then determine the effects of each enhancementthe results are shown in figure 1the last column in the graph shows the average across all the target verbson average the basic trofi algorithm gives a 76 improvement over the baseline with some words like lend and touch having higher results due to transitivity of similarityfor our sum of similarities enhancement all the individual target word results except for examine sit above the baselinethe dip is due to the fact that while trofi can generate some beneficial similarities between words related by context it can also generate some detrimental oneswhen we use sum of similarities it is possible for the transitively discovered indirect similarities between a target nonliteral sentence and all the sentences in a feedback set to add up to more than a single direct similarity between the target sentence and a single feedback set sentencethis is not possible with highest similarity because a single sentence would have to show a higher similarity to the target sentence than that produced by sharing an identical word which is unlikely since transitively discovered similarities generally do not add up to 1so although highest similarity occasionally produces better results than using sum of similarities on average we can expect to get better results with the latterin this experiment alone we get an average fscore of 463 for the sum of similarities results a 94 improvement over 
the high similarity results and a 169 improvement over the baseline in comparing the individual results of all our learners we found that the results for learners a and d eclipsed learners b and c by just over 25using majorityrules voting with learners a and d doubled we were able to obtain an average fscore of 484 showing that voting does to an extent balance out the learners varying results on different wordsthe addition of supertags caused improvements in some words like drag and stickthe overall gain was only 05 likely due to an overgeneration of similaritiesfuture work may identify ways to use supertags more effectivelythe use of additional context was responsible for our second largest leap in performance after sum of similaritieswe gained 49 bringing us to an average fscore of 538worth noting is that the target words exhibiting the most significant improvement drown and grasp had some of the smallest target and feedback set feature sets supporting the theory that adding cogent features may improve performancewith an average of 538 all words but one lie well above our simpleattraction baseline and some even achieve much higher results than the majorityrules baselinenote also that using this latter baseline trofi boosts the nonliteral fscore from 0 to 423in this section we discuss the trofi example basefirst we examine iterative augmentationthen we discuss the structure and contents of the example base and the potential for expansionafter an initial run for a particular target word we have the cluster results plus a record of the feedback sets augmented with the newly clustered sentenceseach feedback set sentence is saved with a classifier weight with newly clustered sentences receiving a weight of 10subsequent runs may be done to augment the initial clustersfor these runs we use the classifiers from our initial run as feedback setsnew sentences for clustering are treated like a regular target setrunning trofi produces new clusters and reweighted classifiers augmented with newly clustered sentencesthere can be as many runs as desired hence iterative augmentationwe used the iterative augmentation process to build a small example base consisting of the target words from table 1 as well as another 25 words drawn from the examples of scholars whose work was reviewed in section 2it is important to note that in building the example base we used trofi with an active learning component which improved our average fscore from 538 to 649 on the original 25 target wordsan excerpt from the example base is shown in figure 2each entry includes an id number and a nonliteral literal or unannotated tagannotations are from testing or from active learning during examplebase constructionthe trofi example base is available at httpwwwcssfucaanoopstudentsjbirkefurther unsupervised expansion of the existing clusters as well as the production of additional clusters is a possibilityin this paper we presented trofi a system for separating literal and nonliteral usages of verbs through statistical wordsense disambiguation and clustering techniqueswe suggest that trofi is applicable to all sorts of nonliteral language and that although it is currently focused on english verbs it could be adapted to other parts of speech and other languageswe adapted an existing wordsense disambiguation algorithm to literalnonliteral clustering through the redefinition of literal and nonliteral as word senses the alteration of the similarity scores used and the addition of learners and voting supertags and additional contextfor all 
our models and algorithms we carried out detailed experiments on handannotated data both to fully evaluate the system and to arrive at an optimal configurationthrough our enhancements we were able to produce results that are on average 169 higher than the core algorithm and 244 higher than the baselinefinally we used our optimal configuration of trofi together with active learning and iterative augmentation to build the trofi example base a publicly available expandable resource of literalnonliteral usage clusters that we hope will be useful not only for future research in the field of nonliteral language processing but also as training data for other statistical nlp tasks
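The attraction step at the core of TroFi can be illustrated with a much simplified stand-in. The real system computes transitive sentence and word similarities iteratively through the adapted KE algorithm, whereas the sketch below only sums one-pass cosine similarities between bag-of-feature sets and applies the sum-of-similarities decision with a threshold; the function names and the cosine choice are illustrative assumptions, not the paper's method.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    # a, b: Counter bags of stemmed noun/verb features for one sentence
    dot = sum(v * b[w] for w, v in a.items())
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def attract(target, literal_fb, nonliteral_fb, threshold=0.0):
    """Sum-of-similarities attraction of one target feature set to the
    literal and nonliteral feedback sets (lists of Counters). A simplified
    stand-in for the iterative similarity computation."""
    lit = sum(cosine(target, s) for s in literal_fb)
    non = sum(cosine(target, s) for s in nonliteral_fb)
    if max(lit, non) <= threshold:
        return None                # left unclassified (scored as an error)
    return "literal" if lit > non else "nonliteral"

# hypothetical usage:
# attract(Counter({"cognac": 1, "bottle": 1}), literal_sets, nonliteral_sets)
```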
E06-1042
a clustering approach for nearly unsupervised recognition of nonliteral language in this paper we present trofi a system for automatically classifying literal and nonliteral usages of verbs through nearly unsupervised wordsense disambiguation and clustering techniques trofi uses sentential context instead of selectional constraint violations or paths in semantic hierarchies it also uses literal and nonliteral seed sets acquired and cleaned without human supervision in order to bootstrap learning we adapt a wordsense disambiguation algorithm to our task and augment it with multiple seed set learners a voting schema and additional features like supertags and extrasentential context detailed experiments on handannotated data show that our enhanced algorithm outperforms the baseline by 24.4 using the trofi algorithm we also build the trofi example base an extensible resource of annotated literalnonliteral examples which is freely available to the nlp research community for scoring literal recall is defined as the proportion of all literal sentences that are placed in the literal cluster and literal precision is defined as the proportion of sentences in the literal cluster that are in fact literal we model literal vs nonliteral classification as a word sense disambiguation task and use a clustering algorithm which compares test instances to two automatically constructed seed sets assigning the label of the closest set
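The scoring scheme just described can be made concrete as follows. Nonliteral precision and recall mirror the literal definitions, the f-score is the usual harmonic mean, and overall performance is the f-score of the averaged precision and recall; the empty-cluster conventions follow the paper's evaluation section, while the dictionary-based interface is an illustrative assumption.

```python
def f_score(p, r):
    return 0.0 if p + r == 0 else 2 * p * r / (p + r)

def evaluate(gold, predicted):
    """gold, predicted: dicts mapping sentence id -> 'literal'/'nonliteral'.
    Sentences attracted to neither or both clusters are assumed to have
    already been assigned to the wrong cluster, as in the paper."""
    def prf(label):
        in_cluster = [s for s, y in predicted.items() if y == label]
        relevant = [s for s, y in gold.items() if y == label]
        if not relevant:                      # no gold items of this type
            recall = 1.0
            precision = 1.0 if not in_cluster else 0.0
        else:
            correct = sum(1 for s in in_cluster if gold[s] == label)
            recall = correct / len(relevant)
            precision = correct / len(in_cluster) if in_cluster else 0.0
        return precision, recall

    lp, lr = prf("literal")
    np_, nr = prf("nonliteral")
    avg_p, avg_r = (lp + np_) / 2, (lr + nr) / 2
    return {"literal_f": f_score(lp, lr),
            "nonliteral_f": f_score(np_, nr),
            "overall_f": f_score(avg_p, avg_r)}
```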
automatically constructing a lexicon of verb phrase idiomatic combinations we investigate the lexical and syntactic flexibility of a class of idiomatic expressions we develop measures that draw on such linguistic properties and demonstrate that these statistical corpusbased measures can be successfully used for distinguishing idiomatic combinations from nonidiomatic ones we also propose a means for automatically determining which syntactic forms a particular idiom can appear in and hence should be included in its lexical representation the term idiom has been applied to a fuzzy category with prototypical examples such as by and large kick the bucket and let the cat out of the bagproviding a definitive answer for what idioms are and determining how they are learned and understood are still subject to debate nonetheless they are often defined as phrases or sentences that involve some degree of lexical syntactic andor semantic idiosyncrasyidiomatic expressions as a part of the vast family of figurative language are widely used both in colloquial speech and in written languagemoreover a phrase develops its idiomaticity over time consequently new idioms come into existence on a daily basis idioms thus pose a serious challenge both for the creation of widecoverage computational lexicons and for the development of largescale linguistically plausible natural language processing systems one problem is due to the range of syntactic idiosyncrasy of idiomatic expressionssome idioms such as by and large contain syntactic violations these are often completely fixed and hence can be listed in a lexicon as words with spaces however among those idioms that are syntactically wellformed some exhibit limited morphosyntactic flexibility while others may be more syntactically flexiblefor example the idiom shoot the breeze undergoes verbal inflection but not internal modification or passivization in contrast the idiom spill the beans undergoes verbal inflection internal modification and even passivizationclearly a wordswithspaces approach does not capture the full range of behaviour of such idiomatic expressionsanother barrier to the appropriate handling of idioms in a computational system is their semantic idiosyncrasythis is a particular issue for those idioms that conform to the grammar rules of the languagesuch idiomatic expressions are indistinguishable on the surface from compositional phrases but a computational system must be capable of distinguishing the twofor example a machine translation system should translate the idiom shoot the breeze as a single unit of meaning whereas this is not the case for the literal phrase shoot the birdin this study we focus on a particular class of english phrasal idioms ie those that involve the combination of a verb plus a noun in its direct object positionexamples include shoot the breeze pull strings and push ones luckwe refer to these as verbnoun idiomatic combinations the class of vnics accommodates a large number of idiomatic expressions moreover their peculiar behaviour signifies the need for a distinct treatment in a computational lexicon despite this vnics have been granted relatively little attention within the computational linguistics communitywe look into two closely related problems confronting the appropriate treatment of vnics the problem of determining their degree of flexibility and the problem of determining their level of idiomaticitysection 2 elaborates on the lexicosyntactic flexibility of vnics and how this relates to their idiomaticityin section 3 
we propose two linguisticallymotivated statistical measures for quantifying the degree of lexical and syntactic inflexibility of verbnoun combinationssection 4 presents an evaluation of the proposed measuresin section 5 we put forward a technique for determining the syntactic variations that a vnic can undergo and that should be included in its lexical representationsection 6 summarizes our contributionsalthough syntactically wellformed vnics involve a certain degree of semantic idiosyncrasyunlike compositional verbnoun combinations the meaning of vnics cannot be solely predicted from the meaning of their partsthere is much evidence in the linguistic literature that the semantic idiosyncrasy of idiomatic combinations is reflected in their lexical andor syntactic behavioura limited number of idioms have one lexical variants eg blow ones own trumpet and toot ones own horn however most are lexically fixed to a large extentneither shoot the wind nor fling the breeze are typically recognized as variations of the idiom shoot the breezesimilarly spill the beans has an idiomatic meaning while spill the peas and spread the beans have only literal interpretationsidiomatic combinations are also syntactically peculiar most vnics cannot undergo syntactic variations and at the same time retain their idiomatic interpretationsit is important however to note that vnics differ with respect to the degree of syntactic flexibility they exhibitsome are syntactically inflexible for the most part while others are more versatile as illustrated in 1 and 2 linguists have explained the lexical and syntactic flexibility of idiomatic combinations in terms of their semantic analyzability semantic analyzability is inversely related to idiomaticityfor example the meaning of shoot the breeze a highly idiomatic expression has nothing to do with either shoot or breezein contrast a less idiomatic expression such as spill the beans can be analyzed as spill corresponding to reveal and beans referring to secretgenerally the constituents of a semantically analyzable idiom can be mapped onto their corresponding referents in the idiomatic interpretationhence analyzable expressions are often more open to lexical substitution and syntactic variationwe use the observed connection between idiomaticity and flexibility to devise statistical measures for automatically distinguishing idiomatic from literal verbnoun combinationswhile vnics vary in their degree of flexibility on the whole they contrast with compositional phrases which are more lexically productive and appear in a wider range of syntactic formswe thus propose to use the degree of lexical and syntactic flexibility of a given verbnoun combination to determine the level of idiomaticity of the expressionit is important to note that semantic analyzability is neither a necessary nor a sufficient condition for an idiomatic combination to be lexically or syntactically flexibleother factors such as the communicative intentions and pragmatic constraints can motivate a speaker to use a variant in place of a canonical form nevertheless lexical and syntactic flexibility may well be used as partial indicators of semantic analyzability and hence idiomaticityhere we describe our measures for idiomaticity which quantify the degree of lexical syntactic and overall fixedness of a given verbnoun combination represented as a verbnoun paira vnic is lexically fixed if the replacement of any of its constituents by a semantically similar word generally does not result in another vnic but in an 
invalid or a literal expression. One way of measuring the lexical fixedness of a given verb-noun combination is thus to examine the idiomaticity of its variants, i.e., expressions generated by replacing one of the constituents by a similar word. This approach has two main challenges: it requires prior knowledge about the idiomaticity of expressions, and it needs information on similarity among words. Inspired by Lin, we examine the strength of association between the verb and noun constituents of the target combination and its variants, as an indirect cue to their idiomaticity. We use the automatically-built thesaurus of Lin to find similar words to the noun of the target expression, in order to automatically generate variants. Only the noun constituent is varied, since replacing the verb constituent of a VNIC with a semantically related verb is more likely to yield another VNIC, as in keep/lose one's cool. Let S_n = {n_1, ..., n_K} be the set of the K nouns most similar to the noun n of the target pair <v, n>. We calculate the association strength for the target pair and for each of its variants <v, n_i> using pointwise mutual information:

PMI(v, n) = log [ P(v, n) / ( P(v) P(n) ) ], estimated as log [ f(v, n) f(*, *) / ( f(v, *) f(*, n) ) ]

where v and n are the target verb and noun, V is the set of all transitive verbs in the corpus, N is the set of all nouns appearing as the direct object of some verb, f(v, n) is the frequency of v and n occurring as a verb-object pair, f(v, *) is the total frequency of the target verb with any noun in N, f(*, n) is the total frequency of the noun in the direct object position of any verb in V, and f(*, *) is the total frequency of all verb-object pairs. Lin assumes that a target expression is non-compositional if and only if its PMI value is significantly different from that of any of the variants. Instead, we propose a novel technique that brings together the association strengths (PMI values) of the target and the variant expressions into a single measure reflecting the degree of lexical fixedness for the target pair. We assume that the target pair is lexically fixed to the extent that its PMI deviates from the average PMI of its variants. Our measure calculates this deviation, normalized using the sample's standard deviation s:

Fixedness_lex(v, n) = ( PMI(v, n) - mean_{1<=i<=K} PMI(v, n_i) ) / s

Compared to compositional verb-noun combinations, VNICs are expected to appear in more restricted syntactic forms. To quantify the syntactic fixedness of a target verb-noun pair, we thus need to (i) identify relevant syntactic patterns, i.e., those that help distinguish VNICs from literal verb-noun combinations, and (ii) translate the frequency distribution of the target pair in the identified patterns into a measure of syntactic fixedness. Determining a unique set of syntactic patterns appropriate for the recognition of all idiomatic combinations is difficult; indeed, exactly which forms an idiomatic combination can occur in is not entirely predictable. Nonetheless, there are hypotheses about the difference in behaviour of VNICs and literal verb-noun combinations with respect to particular syntactic variations: linguists note that semantic analyzability is related to the referential status of the noun constituent, which is in turn related to participation in certain morphosyntactic forms. In what follows, we describe three types of variation that are tolerated by literal combinations but are prohibited by many VNICs. Passivization: there is much evidence in the linguistic literature that VNICs often do not undergo passivization; linguists mainly attribute this to the fact that only a referential noun can appear as the surface subject of a passive construction. Determiner type: a strong correlation exists between the flexibility of the determiner preceding the noun in a verb-noun combination and the overall flexibility of the phrase; it is, however, important to note that the nature of the determiner
is also affected by other factors, such as the semantic properties of the noun. Pluralization: while the verb constituent of a VNIC is morphologically flexible, the morphological flexibility of the noun relates to its referential status; a non-referential noun constituent is expected to mainly appear in just one of the singular or plural forms. The pluralization of the noun is, of course, also affected by its semantic properties. Merging the three variation types results in a pattern set, P, of several distinct syntactic patterns, given in Table 1. The second step is to devise a statistical measure that quantifies the degree of syntactic fixedness of a verb-noun pair with respect to the selected set of patterns. We propose a measure that compares the syntactic behaviour of the target pair with that of a typical verb-noun pair. The syntactic behaviour of a typical pair is defined as the prior probability distribution over the patterns in P; the prior probability of an individual pattern pt is estimated as

P(pt) = f(*, *, pt) / sum_{pt_i in P} f(*, *, pt_i)

where f(v, n, pt) is the frequency of the pair <v, n> appearing in pattern pt, and * indicates summation over all transitive verbs and all nouns in the corpus. The syntactic behaviour of the target verb-noun pair is defined as the posterior probability distribution over the patterns, given the particular pair; the posterior probability of an individual pattern is estimated as

P(pt | v, n) = f(v, n, pt) / sum_{pt_i in P} f(v, n, pt_i)

The degree of syntactic fixedness of the target verb-noun pair is estimated as the divergence of its syntactic behaviour (the posterior distribution over the patterns) from the typical syntactic behaviour (the prior distribution). The divergence of the two probability distributions is calculated using a standard information-theoretic measure, the Kullback-Leibler divergence:

Fixedness_syn(v, n) = D( P(pt | v, n) || P(pt) ) = sum_{pt in P} P(pt | v, n) log [ P(pt | v, n) / P(pt) ]

KL-divergence is always non-negative, and is zero if and only if the two distributions are exactly the same. KL-divergence is argued to be problematic because it is not a symmetric measure; nonetheless, it has proven useful in many NLP applications, and the asymmetry is not an issue here, since we are concerned with the relative distance of several posterior distributions from the same prior. VNICs are hypothesized to be, in most cases, both lexically and syntactically more fixed than literal verb-noun combinations. We thus propose a new measure of idiomaticity, Fixedness_overall(v, n), a measure of the overall fixedness of a given pair. We define it as

Fixedness_overall(v, n) = alpha * Fixedness_syn(v, n) + (1 - alpha) * Fixedness_lex(v, n)

where alpha weights the relative contribution of the measures in predicting idiomaticity. To evaluate our proposed fixedness measures, we determine their appropriateness as indicators of idiomaticity. We pose a classification task in which idiomatic verb-noun pairs are distinguished from literal ones. We use each measure to assign scores to the experimental pairs; we then classify the pairs by setting a threshold, here the median score, where all expressions with scores higher than the threshold are labeled as idiomatic and the rest as literal. We assess the overall goodness of a measure by looking at its accuracy and the relative reduction in error rate (RER) on the classification task described above. The RER of a measure reflects the improvement in its accuracy relative to another measure. We consider two baselines: (i) a random baseline that randomly assigns a label to each verb-noun pair; (ii) a more informed baseline, pointwise mutual information (PMI), an information-theoretic measure widely used for extracting statistically significant collocations. We use the British National Corpus (BNC) to extract verb-noun pairs, along with information on the syntactic patterns they appear in. We automatically parse the corpus using the Collins parser, and further process it using TGrep2. For each instance of a transitive verb, we use heuristics to extract the noun phrase in either the direct object position or the subject position. We then use NP-head extraction software to get the head noun of the
extracted np its number and the determiner introducing itwe select our development and test expressions from verbnoun pairs that involve a member of a predefined list of basic verbsbasic verbs in their literal use refer to states or acts that are central to human experiencethey are thus frequent highly polysemous and tend to combine with other words to form idiomatic combinations an initial list of such verbs was selected from several linguistic and psycholinguistic studies on basic vocabulary we further augmented this initial list with verbs that are semantically related to another verb already in the from the corpus we extract all verbnoun pairs with minimum frequency of that contain a basic verbfrom these we semirandomly select an idiomatic and a literal subset5 a pair is considered idiomatic if it appears in a credible idiom dictionary such as the oxford dictionary of current idiomatic english or the collins cobuild idioms dictionary otherwise the pair is considered literalwe then randomly pull out development and test pairs ensuring both low and high frequency items are includedsample idioms corresponding to the extracted pairs are kick the habit move mountains lose face and keep ones worddevelopment expressions are used in devising the fixedness measures as well as in determining the values of the parameters in eqn and in eqn determines the maximum number of nouns similar to the target noun to be considered in measuring the lexical fixedness of a given pairthe value of this parameter is determined by performing experiments over the development data in which ranges from to by steps of is set to based on the resultswe also experimented with different values of ranging from to by steps of based on the development results the best value for is test expressions are saved as unseen data for the final evaluationwe further divide the set of all test expressions test into two sets corresponding to two frequency bands test contains idiomatic and literal pairs each with total frequency between and test consists of idiomatic and literal pairs each with total frequency of or greater all frequency counts are over the entire bncwe first examine the performance of the individual fixedness measures and 5in selecting literal pairs we choose those that involve a physical act corresponding to the basic semantics of the verb as well as that of the two baselines and see table 2as can be seen the informed baseline shows a large improvement over the random baseline this shows that one can get relatively good performance by treating verbnoun idiomatic combinations as collocations performs as well as the informed baseline this result shows that as hypothesized lexical fixedness is a reasonably good predictor of idiomaticitynonetheless the performance signifies a need for improvementpossibly the most beneficial enhancement would be a change in the way we acquire the similar nouns for a target nounthe best performance belongs to with error reduction over the random baseline and error reduction over the informed baselinethese results demonstrate that syntactic fixedness is a good indicator of idiomaticity better than a simple measure of collocation or a measure of lexical fixednessthese results further suggest that looking into deep linguistic properties of vnics is both necessary and beneficial for the appropriate treatment of these expressions is known to perform poorly on low frequency datato examine the effect of frequency on the measures we analyze their performance on the two divisions of the test data 
corresponding to the two frequency bands. Results are given in Table 3, with the best performance shown in boldface. As expected, the performance of PMI drops substantially for low frequency items. Interestingly, although it is a PMI-based measure, Fixedness_lex performs slightly better when the data is separated based on frequency. The performance of Fixedness_syn improves quite a bit when it is applied to high frequency items, while it improves only slightly on the low frequency items. These results show that both fixedness measures perform better on homogeneous data, while retaining comparably good performance on heterogeneous data. These results reflect that our fixedness measures are not as sensitive to frequency as PMI; hence they can be used with a higher degree of confidence, especially when applied to data that is heterogeneous with regard to frequency. This is important because, while some VNICs are very common, others have very low frequency. Table 4 presents the performance of the hybrid measure, Fixedness_overall, repeating that of Fixedness_lex and Fixedness_syn for comparison. Fixedness_overall outperforms both the lexical and syntactic fixedness measures, with a substantial improvement over Fixedness_lex and a small, but notable, improvement over Fixedness_syn. Each of the lexical and syntactic fixedness measures is a good indicator of idiomaticity on its own, with syntactic fixedness being a better predictor. Here we demonstrate that combining them into a single measure of fixedness, while giving more weight to the better measure, results in a more effective predictor of idiomaticity. Our evaluation of the fixedness measures demonstrates their usefulness for the automatic recognition of idiomatic verb-noun pairs. To represent such pairs in a lexicon, however, we must determine their canonical form(s), C-forms henceforth. For example, the lexical representation of <shoot, breeze> should include shoot the breeze as a C-form. Since VNICs are syntactically fixed, they are mostly expected to have a single C-form. Nonetheless, there are idioms with two or more acceptable forms; for example, hold fire and hold one's fire are both listed in CCID as variations of the same idiom. Our approach should thus be capable of predicting all allowable forms for a given idiomatic verb-noun pair. We expect a VNIC to occur in its C-form(s) more frequently than it occurs in any other syntactic patterns. To discover the C-form(s) for a given idiomatic verb-noun pair, we thus examine its frequency of occurrence in each syntactic pattern in P. Since it is possible for an idiom to have more than one C-form, we cannot simply take the most dominant pattern as the canonical one. Instead, we calculate a z-score for the target pair <v, n> and each pattern pt in P:

z(v, n, pt) = ( f(v, n, pt) - f_mean ) / s

in which f_mean is the mean and s the standard deviation of the frequencies f(v, n, pt_i) over the sample, i.e., over all patterns pt_i in P. The z statistic indicates how far, and in which direction, the frequency of occurrence of the pair in pattern pt deviates from the sample's mean, expressed in units of the sample's standard deviation. To decide whether pt is a canonical pattern for the target pair, we check whether z(v, n, pt) > T_z, where T_z is a threshold. For evaluation, we set T_z based on the distribution of the z scores and through examining the development data. We evaluate the appropriateness of this approach in determining the C-form(s) of idiomatic pairs by verifying its predicted forms against ODCIE and CCID. Specifically, for each of the idiomatic pairs in the test data, we calculate the precision and recall of its predicted C-forms compared to the C-forms listed in the two dictionaries. The average precision across the 100 test pairs is 81.7%, and the average recall is 88.0%. Moreover, we find that for the overwhelming majority of the pairs, the predicted C-form with the highest z-score appears in the dictionary
entry of the pairthus our method of detecting cforms performs quite wellthe significance of the role idioms play in language has long been recognizedhowever due to their peculiar behaviour idioms have been mostly overlooked by the nlp communityrecently there has been growing awareness of the importance of identifying noncompositional multiword expressions nonetheless most research on the topic has focused on compound nouns and verb particle constructionsearlier work on idioms have only touched the surface of the problem failing to propose explicit mechanisms for appropriately handling themhere we provide effective mechanisms for the treatment of a broadly documented and crosslinguistically frequent class of idioms ie vnicsearlier research on the lexical encoding of idioms mainly relied on the existence of human annotations especially for detecting which syntactic variations an idiom can undergo we propose techniques for the automatic acquisition and encoding of knowledge about the lexicosyntactic behaviour of idiomatic combinationswe put forward a means for automatically discovering the set of syntactic variations that are tolerated by a vnic and that should be included in its lexical representationmoreover we incorporate such information into statistical measures that effectively predict the idiomaticity level of a given expressionin this regard our work relates to previous studies on determining the compositionality of mwes other than idiomsmost previous work on compositionality of mwes either treat them as collocations or examine the distributional similarity between the expression and its constituents lin and wermter and hahn go one step further and look into a linguistic property of noncompositional compoundstheir lexical fixednessto identify themvenkatapathy and joshi combine aspects of the abovementioned work by incorporating lexical fixedness collocationbased and distributional similarity measures into a set of features which are used to rank verbnoun combinations according to their compositionalityour work differs from such studies in that it carefully examines several linguistic properties of vnics that distinguish them from literal combinationsmoreover we suggest novel techniques for translating such characteristics into measures that predict the idiomaticity level of verbnoun combinationsmore specifically we propose statistical measures that quantify the degree of lexical syntactic and overall fixedness of such combinationswe demonstrate that these measures can be successfully applied to the task of automatically distinguishing idiomatic combinations from nonidiomatic oneswe also show that our syntactic and overall fixedness measures substantially outperform a widely used measure of collocation even when the latter takes syntactic relations into accountothers have also drawn on the notion of syntactic fixedness for idiom detection though specific to a highly constrained type of idiom our syntactic fixedness measure looks into a broader set of patterns associated with a large class of idiomatic expressionsmoreover our approach is general and can be easily extended to other idiomatic combinationseach measure we use to identify vnics captures a different aspect of idiomaticity reflects the statistical idiosyncrasy of vnics while the fixedness measures draw on their lexicosyntactic peculiaritiesour ongoing work focuses on combining these measures to distinguish vnics from other idiosyncratic verbnoun combinations that are neither purely idiomatic nor completely literal so that we 
can identify linguistically plausible classes of verbnoun combinations on this continuum
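As an illustration of the lexical fixedness measure described above, the following sketch computes a PMI-based fixedness score from verb-object co-occurrence counts. It is a minimal reconstruction, not the authors' code: the function names, the data structures holding the counts, and the decision to include the target pair in the standard-deviation sample are assumptions.

```python
import math
from statistics import mean, stdev

def pmi(v, n, pair_freq, verb_freq, noun_freq, total_pairs):
    # Pointwise mutual information of a verb and its direct-object noun,
    # estimated from corpus counts (the pair is assumed to be attested).
    p_vn = pair_freq[(v, n)] / total_pairs
    return math.log(p_vn / ((verb_freq[v] / total_pairs) * (noun_freq[n] / total_pairs)))

def lexical_fixedness(v, n, similar_nouns, pair_freq, verb_freq, noun_freq, total_pairs):
    # Deviation of the target pair's PMI from the average PMI of its variants
    # (same verb combined with distributionally similar nouns), normalized by
    # the sample standard deviation.  Returns 0.0 if no variant is attested.
    variants = [ni for ni in similar_nouns if (v, ni) in pair_freq]
    if not variants:
        return 0.0
    variant_pmis = [pmi(v, ni, pair_freq, verb_freq, noun_freq, total_pairs) for ni in variants]
    target_pmi = pmi(v, n, pair_freq, verb_freq, noun_freq, total_pairs)
    s = stdev(variant_pmis + [target_pmi]) or 1.0  # guard against a zero spread
    return (target_pmi - mean(variant_pmis)) / s
```

In the setting described above, the counts would come from verb-object pairs extracted from a parsed corpus such as the BNC, and similar_nouns would be the K nearest neighbours of the target noun in a distributional thesaurus.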
E06-1043
automatically constructing a lexicon of verb phrase idiomatic combinationswe investigate the lexical and syntactic flexibility of a class of idiomatic expressionswe develop measures that draw on such linguistic properties and demonstrate that these statistical corpusbased measures can be successfully used for distinguishing idiomatic combinations from nonidiomatic oneswe also propose a means for automatically determining which syntactic forms a particular idiom can appear in and hence should be included in its lexical representationto measure fixedness we use statistical measures of lexical syntactic and overall fixednesswe come up with a dozen possible syntactic forms for verbobject pairs and use a corpus based statistical measure to determine the canonical form
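The syntactic side of the fixedness measures and the canonical-form detection sketched in the text above can likewise be written down compactly. The sketch below assumes the pattern frequencies have already been counted; the smoothing constant, the weight alpha and the z-score threshold are illustrative placeholders, since the actual values used in the paper are not reproduced here.

```python
import math
from statistics import mean, stdev

def syntactic_fixedness(pair_pattern_freq, prior_pattern_freq, smoothing=0.1):
    # KL divergence of the pair's pattern distribution (posterior) from the
    # pattern distribution of verb-noun pairs in general (prior); both
    # distributions are add-`smoothing` smoothed to avoid zero probabilities.
    patterns = list(prior_pattern_freq)
    post_total = sum(pair_pattern_freq.get(pt, 0) for pt in patterns) + smoothing * len(patterns)
    prior_total = sum(prior_pattern_freq.values()) + smoothing * len(patterns)
    divergence = 0.0
    for pt in patterns:
        p = (pair_pattern_freq.get(pt, 0) + smoothing) / post_total
        q = (prior_pattern_freq[pt] + smoothing) / prior_total
        divergence += p * math.log(p / q)
    return divergence

def overall_fixedness(fix_syn, fix_lex, alpha=0.6):
    # Weighted combination of the two measures; alpha is a tunable weight.
    return alpha * fix_syn + (1 - alpha) * fix_lex

def canonical_patterns(pair_pattern_freq, z_threshold=1.0):
    # Patterns whose frequency for this pair lies more than z_threshold
    # standard deviations above the mean over all patterns.
    freqs = list(pair_pattern_freq.values())
    mu = mean(freqs)
    sigma = stdev(freqs) if len(freqs) > 1 else 1.0
    sigma = sigma or 1.0
    return [pt for pt, f in pair_pattern_freq.items() if (f - mu) / sigma > z_threshold]
```

canonical_patterns would be run over the same pattern set used for the fixedness measure and returns every pattern whose z-score clears the threshold, so idioms with more than one acceptable form are handled naturally.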
exploiting shallow linguistic information for relation extraction from biomedical literature we propose an approach for extracting relations between entities from biomedical literature based solely on shallow linguistic information we use a combination of kernel functions to integrate two different information sources the whole sentence where the relation appears and the local contexts around the interacting entities we performed experiments on extracting gene and protein interactions from two different data sets the results show that our approach outperforms most of the previous methods based on syntactic and semantic information information extraction is the process of finding relevant entities and their relationships within textual documentsapplications of ie range from semantic web to bioinformaticsfor example there is an increasing interest in automatically extracting relevant information from biomedical literaturerecent evaluation campaigns on bioentity recognition such as biocreative and jnlpba 2004 shared task have shown that several systems are able to achieve good performance however relation identification is more useful from an applicative perspective but it is still a considerable challenge for automatic toolsin this work we propose a supervised machine learning approach to relation extraction which is applicable even when linguistic processing is not available or reliablein particular we explore a kernelbased approach based solely on shallow linguistic processing such as tokenization sentence splitting partofspeech tagging and lemmatizationkernel methods show their full potential when an explicit computation of the feature map becomes computationally infeasible due to the high or even infinite dimension of the feature spacefor this reason kernels have been recently used to develop innovative approaches to relation extraction based on syntactic information in which the examples preserve their original representations and are compared by the kernel function despite the positive results obtained exploiting syntactic information we claim that there is still room for improvement relying exclusively on shallow linguistic information for two main reasonsfirst of all previous comparative evaluations put more stress on the deep linguistic approaches and did not put as much effort on developing effective methods based on shallow linguistic informationa second reason concerns the fact that syntactic parsing is not always robust enough to deal with realworld sentencesthis may prevent approaches based on syntactic features from producing any resultanother related issue concerns the fact that parsers are available only for few languages and may not produce reliable results when used on domain specific texts for example most of the participants at the learning language in logic challenge on genic interaction extraction were unable to successfully exploit linguistic information provided by parsersit is still an open issue whether the use of domainspecific treebanks can be successfully exploited to overcome this problemtherefore it is essential to better investigate the potential of approaches based exclusively on simple linguistic featuresin our approach we use a combination of kernel functions to represent two distinct information sources the global context where entities appear and their local contextsthe whole sentence where the entities appear is used to discover the presence of a relation between two entities similarly to what was done by bunescu and mooney windows of limited size 
around the entities provide useful clues to identify the roles of the entities within a relationthe approach has some resemblance with what was proposed by roth and yih the main difference is that we perform the extraction task in a single step via a combined kernel while they used two separate classifiers to identify entities and relations and their output is later combined with a probabilistic global inferencewe evaluated our relation extraction algorithm on two biomedical data sets the motivations for using these benchmarks derive from the increasing applicative interest in tools able to extract relations between relevant entities in biomedical texts and consequently from the growing availability of annotated data setsthe experiments show clearly that our approach consistently improves previous resultssurprisingly it outperforms most of the systems based on syntactic or semantic information even when this information is manually annotated the problem considered here is that of identifying interactions between genes and proteins from biomedical literaturemore specifically we performed experiments on two slightly different benchmark data sets in the former geneprotein interactions are annotated without distinguishing the type and roles of the two interacting entitiesthe latter is more realistic because it also aims at identifying the roles played by the interacting entities for example in figure 1 three entities are mentioned and two of the six ordered pairs of geniatopicscorpusgtbhtml entities actually interact cwlh and in our approach we cast relation extraction as a classification problem in which examples are generated from sentences as followsfirst of all we describe the complex case namely the proteingene interactions for this data set entity recognition is performed using a dictionary of protein and gene names in which the type of the entities is unknownwe generate examples for all the sentences containing at least two entitiesthus the number of examples generated for each sentence is given by the combinations of distinct entities selected two at a time ienc2for example as the sentence shown in figure 1 contains three entities the total number of examples generated is 3c2 3in each example we assign the attribute candidate to each of the candidate interacting entities while the other entities in the example are assigned the attribute other meaning that they do not participate in the relationif a relation holds between the two candidate interacting entities the example is labeled 1 or 2 0 otherwisefigure 2 shows the examples generated from the sentence in figure 1note that in generating the examples from the sentence in figure 1 we did not create three negative examples thereby implicitly undersampling the data setthis allows us to make the classification task simpler without loosing informationas a matter of fact generating examples for each ordered pair of entities would produce two subsets of the same size containing similar examples but with different classification labelsfurthermore undersampling allows us to halve the data set size and reduce the data skewnessfor the proteinprotein interaction task we use the correct entities provided by the manual annotationas said at the beginning of this section this task is simpler than the lll challenge because there is no distinction between types and roles as a consequence the examples are generated as described above with the following difference an example is labeled 1 if a relation holds between the two candidate interacting entities 
0 otherwise. The basic idea behind kernel methods is to embed the input data into a suitable feature space F via a mapping function phi : X -> F, and then use a linear algorithm for discovering nonlinear patterns. Instead of using the explicit mapping phi, we can use a kernel function K : X x X -> R that corresponds to the inner product in a feature space which is, in general, different from the input space. Kernel methods allow us to design a modular system in which the kernel function acts as an interface between the data and the learning algorithm. Thus the kernel function is the only domain-specific module of the system, while the learning algorithm is a general-purpose component; potentially, any kernel function can work with any kernel-based algorithm. In our approach we use support vector machines. In order to implement the approach based on shallow linguistic information, we employed a linear combination of kernels. Different works empirically demonstrate the effectiveness of combining kernels in this way, showing that the combined kernel always improves the performance of the individual ones. In addition, this formulation allows us to evaluate the individual contribution of each information source. We designed two families of kernels, global context kernels and local context kernels, in which each single kernel is explicitly calculated as follows:

K(x1, x2) = <phi(x1), phi(x2)> / ( ||phi(x1)|| ||phi(x2)|| )    (1)

where phi(.) is the embedding vector and ||.|| is the 2-norm; that is, the kernel is normalized by the product of the norms of the embedding vectors. The normalization factor plays an important role in allowing us to integrate information from heterogeneous feature spaces. Even though the resulting feature space has high dimensionality, an efficient computation of Equation 1 can be carried out explicitly, since the input representations defined below are extremely sparse. Bunescu and Mooney observed that a relation between two entities is generally expressed using only words that appear simultaneously in one of the following three patterns. Fore-Between: tokens before and between the two candidate interacting entities (for instance, "binding of P1 to P2", "interaction involving P1 and P2", "association of P1 by P2"). Between: only tokens between the two candidate interacting entities (for instance, "P1 associates with P2", "P1 binding to P2", "P1 inhibitor of P2"). Between-After: tokens between and after the two candidate interacting entities (for instance, "P1 P2 association", "P1 and P2 interact", "P1 has influence on P2 binding"). Our global context kernels operate on the patterns above, where each pattern is represented using a bag-of-words rather than the sparse subsequences of words, POS tags, entity and chunk types, or WordNet synsets used in previous work. More formally, given a relation example R, we represent a pattern p as a row vector

phi(p) = ( tf(t1, p), tf(t2, p), ..., tf(tl, p) )

where the function tf(ti, p) records how many times a particular token ti is used in p. Note that this approach differs from the standard bag-of-words, as punctuation and stop words are included in phi(p), while the entities are not. To improve the classification performance, we have further extended phi(p) to embed n-grams of tokens. By substituting the extended phi(p) into Equation 1 we obtain the n-gram kernel Kn, which counts the common unigrams, bigrams, ..., n-grams that two patterns have in common. The global context kernel KGC is then defined as

KGC(R1, R2) = KFB(R1, R2) + KB(R1, R2) + KBA(R1, R2)

where KFB, KB and KBA are n-gram kernels that operate on the fore-between, between and between-after patterns, respectively. The type of the candidate interacting entities can provide useful clues for detecting the agent and target of the relation, as well as the presence of the relation itself. As the type is not known, we use the information provided by the two
local contexts of the candidate interacting entities, called the left and right local context, respectively. As typically done in entity recognition, we represent each local context by using the following basic features. Token: the token itself. Lemma: the lemma of the token. POS: the POS tag of the token. Orthographic: this feature maps each token into equivalence classes that encode attributes such as capitalization, punctuation, numerals and so on. Formally, given a relation example R, a local context L = t_-w, ..., t_-1, t_0, t_+1, ..., t_+w is represented as a row vector

phi(L) = ( f1(L), f2(L), ..., fm(L) )

where fi is a feature function that returns 1 if it is active in the specified position of L, 0 otherwise. The local context kernel KLC is defined as

KLC(R1, R2) = Kleft(R1, R2) + Kright(R1, R2)

where Kleft and Kright are defined by substituting the embedding of the left and right local context, respectively, into Equation 1. Notice that KLC differs substantially from KGC, as it considers the ordering of the tokens, and the feature space is enriched with POS, lemma and orthographic features. Finally, the shallow linguistic kernel KSL combines the global and local context kernels, KSL(R1, R2) = KGC(R1, R2) + KLC(R1, R2); it follows directly from the explicit construction of the feature space and from closure properties of kernels that KSL is a valid kernel. The two data sets used for the experiments concern the same domain; however, they present a crucial difference which makes it worthwhile to show the experimental results on both of them: in one case interactions are considered symmetric, while in the other agents and targets of genic interactions have to be identified. The first data set used in the experiments is the AImed corpus, previously used for training protein interaction extraction systems. It consists of 225 MEDLINE abstracts: 200 are known to describe interactions between human proteins, while the other 25 do not refer to any interaction. There are 4084 protein references and around 1000 tagged interactions in this data set. In this data set there is no distinction between genes and proteins, and the relations are symmetric. The second data set was used in the Learning Language in Logic (LLL) challenge on genic interaction extraction. The objective of the challenge was to evaluate the performance of systems based on machine learning techniques to identify gene/protein interactions and their roles, agent or target. The data set was collected by querying MEDLINE on Bacillus subtilis transcription and sporulation. It is divided in a training set and a test set; differently from the training set, the test set contains sentences without interactions. The data set is decomposed in two subsets of increasing difficulty: the first subset does not include coreferences, while the second one includes simple cases of coreference, mainly appositions. Both subsets are available with different kinds of annotation, basic and enriched: the former includes word and sentence segmentation; the latter also includes manually checked information such as lemma and syntactic dependencies. A dictionary of named entities is associated to the data set. Before describing the results of the experiments, a note concerning the evaluation methodology. There are different ways of evaluating performance in extracting information, as noted for the extraction of slot fillers in the seminar announcement and job posting data sets. Adapting the proposed classification to relation extraction, the following two cases can be identified: OAOD, in which every individual occurrence of an interaction has to be extracted, and OARD, in which extracting a single occurrence of each interaction suffices. Figure 3 shows a fragment of tagged text drawn from the AImed corpus. It contains three different interactions between pairs of proteins, for a total of seven occurrences of interactions. For example, there are three occurrences of the interaction between igfir and
p52shc if we adopt the oaod methodology all the seven occurrences have to be extracted to achieve the maximum scoreon the other hand if we use the oard methodology only one occurrence for each interaction has to be extracted to maximize the scoreon the aimed data set both evaluations were performed while on the lll challenge only the oaod evaluation methodology was performed because this is the only one provided by the evaluation server of the challengefigure 3 fragment of the aimed corpus with all proteins and their interactions taggedthe protein names have been highlighted in bold face and their same subscript numbers indicate interaction between the proteinsall the experiments were performed using the svm package libsvm6 customized to embed our own kernelfor the lll challenge submission we optimized the regularization parameter c by 10fold cross validation while we used its default value for the aimed experimentin both experiments we set the costfactor wz to be the ratio between the number of negative and positive examplesksl performance was first evaluated on the aimed data set we first give an evaluation of the kernel combination and then we compare our results with the subsequence kernel for relation extraction described in all experiments are conducted using 10fold cross validation on the same data splitting used in table 1 shows the performance of the three kernels defined in section 3 for proteinprotein interactions using the two evaluation methodologies described abovewe report in figure 4 the precisionrecall curves of erk and ksl using oard evaluation methodology as in the graph points are obtained by varying the threshold on the classififinally figure 5 shows the learning curve of the combined kernel ksl using the oard evaluation methodologythe curve reaches a plateau with around 100 medline abstractsthe system was evaluated on the basic version of the lll challenge data set table 2 shows the results of ksl returned by the scoring service8 for the three subsets of the training set table 3 shows the best results obtained at the official competition performed in april 2005comparing the results we see that ksl trained on each subset outperforms the best systems of the lll challenge9notice that the best results at the challenge were obtained by different groups and exploiting the linguistic enriched version of the data setas observed in the scores obtained using the training set without coreferences and the whole training set are similarwe also report in table 4 an analysis of the kernel combinationgiven that we are interested here in the contribution of each kernel we evaluated the experiments by 10fold crossvalidation on the whole training set avoiding the submission processthe experimental results show that the combined kernel ksl outperforms the basic kernels kgc and klc on both data setsin particular precision significantly increases at the expense of a lower recallhigh precision is particularly advantageous when extracting knowledge from large corpora because it avoids overloading end users with too many false positivesalthough the basic kernels were designed to model complementary aspects of the task achieved a significant improvement fl 684 and fl 647 presence of the relation and roles of the interacting entities they perform reasonably well even when considered separatelyin particular kgc achieved good performance on both data setsthis result was not expected on the lll challenge because this task requires not only to recognize the presence of relationships between 
entities but also to identify their roleson the other hand the outcomes of klc on the aimed data set show that such kernel helps to identify the presence of relationships as wellat first glance it may seem strange that kgc outperforms erk on aimed as the latter approach exploits a richer representation sparse subsequences of words pos tags entity and chunk types or wordnet synsetshowever an approach based on ngrams is sufficient to identify the presence of a relationshipthis result sounds less surprising if we recall that both approaches cast the relation extraction problem as a text categorization taskapproaches to text categorization based on rich linguistic information have obtained less accuracy than the traditional bagofwords approach shallow linguistics information seems to be more effective to model the local context of the entitiesfinally we obtained worse results performing dimensionality reduction either based on generic linguistic assumptions or using statistical methods this may be explained by the fact that in tasks like entity recognition and relation extraction useful clues are also provided by high frequency tokens such as stop words or punctuation marks and by the relative positions in which they appearfirst of all the obvious references for our work are the approaches evaluated on aimed and lll challenge data setsin the authors present a generalized subsequence kernel that works with sparse sequences containing combinations of words and pos tagsthe best results on the lll challenge were obtained by the group from the university of edinburgh which used markov logic a framework that combines loglinear models and first order logic to create a set of weighted clauses which can classify pairs of gene named entities as genic interactionsthese clauses are based on chains of syntactic and semantic relations in the parse or discourse representation structure of a sentence respectivelyother relevant approaches include those that adopt kernel methods to perform relation extractionzelenko et al describe a relation extraction algorithm that uses a tree kernel defined over a shallow parse tree representation of sentencesthe approach is vulnerable to unrecoverable parsing errorsculotta and sorensen describe a slightly generalized version of this kernel based on dependency trees in which a bagofwords kernel is used to compensate for errors in syntactic analysisa further extension is proposed by zhao and grishman they use composite kernels to integrate information from different syntactic sources so that processing errors occurring at one level may be overcome by information from other levelsbunescu and mooney present an alternative approach which uses information concentrated in the shortest path in the dependency tree between the two entitiesas mentioned in section 1 another relevant approach is presented in classifiers that identify entities and relations among them are first learned from local information in the sentencethis information along with constraints induced among entity types and relations is used to perform global probabilistic inference that accounts for the mutual dependencies among the entitiesall the previous approaches have been evaluated on different data sets so that it is not possible to have a clear idea of which approach is better than the otherthe good results obtained using only shallow linguistic features provide a higher baseline against which it is possible to measure improvements obtained using methods based on deep linguistic processingin the near future we 
plan to extend our work in several waysfirst we would like to evaluate the contribution of syntactic information to relation extraction from biomedical literaturewith this aim we will integrate the output of a parser second we plan to test the portability of our model on ace and muc data setsthird we would like to use a named entity recognizer instead of assuming that entities are already extracted or given by a dictionaryour long term goal is to populate databases and ontologies by extracting information from large text collections such as medlinewe would like to thank razvan bunescu for providing detailed information about the aimed data set and the settings of the experimentsclaudio giuliano and lorenza romano have been supported by the ontotext project funded by the autonomous province of trento under the fup2004 research program
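To make the kernel definitions above concrete, here is a small sketch of the normalized bag-of-n-grams kernel over the fore-between, between and between-after patterns. It is an illustration under assumptions, not the authors' implementation: the dictionary keys naming the three patterns and the default n-gram length are invented for the example.

```python
import math
from collections import Counter

def ngram_embedding(tokens, n_max=3):
    # Sparse count vector of all 1..n_max-grams in a token sequence
    # (punctuation and stop words are kept; the candidate entities are
    # assumed to have been removed upstream).
    phi = Counter()
    for n in range(1, n_max + 1):
        for i in range(len(tokens) - n + 1):
            phi[tuple(tokens[i:i + n])] += 1
    return phi

def normalized_kernel(phi1, phi2):
    # Inner product of two sparse embeddings divided by the product of
    # their 2-norms (the normalized kernel of Equation 1).
    dot = sum(c * phi2.get(g, 0) for g, c in phi1.items())
    norm1 = math.sqrt(sum(c * c for c in phi1.values()))
    norm2 = math.sqrt(sum(c * c for c in phi2.values()))
    return dot / (norm1 * norm2) if norm1 and norm2 else 0.0

def global_context_kernel(ex1, ex2, n_max=3):
    # Sum of n-gram kernels over the three global-context patterns of two
    # relation examples, each given as a dict of token lists.
    return sum(
        normalized_kernel(ngram_embedding(ex1[p], n_max), ngram_embedding(ex2[p], n_max))
        for p in ("fore_between", "between", "between_after")
    )
```

A kernel like this can be plugged into an SVM that accepts precomputed Gram matrices (for example scikit-learn's SVC(kernel="precomputed")), which roughly mirrors the way a custom kernel is embedded into an SVM package in the experiments.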
E06-1051
exploiting shallow linguistic information for relation extraction from biomedical literaturewe propose an approach for extracting relations between entities from biomedical literature based solely on shallow linguistic informationwe use a combination of kernel functions to integrate two different information sources the whole sentence where the relation appears and the local contexts around the interacting entitieswe performed experiments on extracting gene and protein interactions from two different data setsthe results show that our approach outperforms most of the previous methods based on syntactic and semantic informationin addition to word features we extract shallow linguistic information such as pos tag lemma and orthographic features of tokens for ppi extraction
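For completeness, the local-context side and the final combination can be sketched in the same style. The feature set below (surface token, lowercased form, coarse orthographic class) is a simplification of the token/lemma/POS/orthographic features described above, since it does not assume a tagger or lemmatizer; all names are illustrative.

```python
import math

def local_context_features(tokens, center, window=2):
    # Position-sensitive binary features for a window of tokens around one
    # candidate entity (index `center`).
    feats = set()
    for offset in range(-window, window + 1):
        i = center + offset
        if 0 <= i < len(tokens):
            tok = tokens[i]
            feats.add((offset, "token", tok))
            feats.add((offset, "lower", tok.lower()))
            ortho = "num" if tok.isdigit() else "cap" if tok[:1].isupper() else "low"
            feats.add((offset, "ortho", ortho))
    return feats

def set_kernel(f1, f2):
    # Normalized overlap of two binary feature sets (Equation 1 with 0/1 vectors).
    if not f1 or not f2:
        return 0.0
    return len(f1 & f2) / math.sqrt(len(f1) * len(f2))

def shallow_linguistic_kernel(ex1, ex2, k_gc):
    # K_SL = K_GC + K_LC, where K_LC sums the kernels over the left and right
    # local contexts (feature sets produced by local_context_features).
    k_lc = set_kernel(ex1["left_ctx"], ex2["left_ctx"]) + set_kernel(ex1["right_ctx"], ex2["right_ctx"])
    return k_gc(ex1, ex2) + k_lc
```

Here k_gc would be the global context kernel from the previous sketch, and ex["left_ctx"] / ex["right_ctx"] the feature sets built around the two candidate entities.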
personalizing pagerank for word sense disambiguation in this paper we propose a new graphbased method that uses the knowledge in a lkb in order to perform unsupervised word sense disambiguation our algorithm uses the full graph of the lkb efficiently performing better than previous approaches in english allwords datasets we also show that the algorithm can be easily ported to other languages with good results with the only requirement of having a wordnet in addition we make an analysis of the performance of the algorithm showing that it is efficient and that it could be tuned to be faster word sense disambiguation is a key enablingtechnology that automatically chooses the intended sense of a word in contextsupervised wsd systems are the best performing in public evaluations but they need large amounts of handtagged data which is typically very expensive to buildgiven the relatively small amount of training data available current stateoftheart systems only beat the simple most frequent sense baseline1 by a small marginas an alternative to supervised systems knowledgebased wsd systems exploit the information present in a lexical knowledge base to perform wsd without using any further corpus evidencetraditional knowledgebased wsd systems assign a sense to an ambiguous word by comparing each of its senses with those of the surrounding contexttypically some semantic similarity metric is used for calculating the relatedness among senses one of the major drawbacks of these approaches stems from the fact that senses are compared in a pairwise fashion and thus the number of computations can grow exponentially with the number of wordsalthough alternatives like simulated annealing and conceptual density were tried most of past knowledge based wsd was done in a suboptimal wordbyword process ie disambiguating words one at a timerecently graphbased methods for knowledgebased wsd have gained much attention in the nlp community these methods use wellknown graphbased techniques to find and exploit the structural properties of the graph underlying a particular lkbbecause the graph is analyzed as a whole these techniques have the remarkable property of being able to find globally optimal solutions given the relations between entitiesgraphbased wsd methods are particularly suited for disambiguating word sequences and they manage to exploit the interrelations among the senses in the given contextin this sense they provide a principled solution to the exponential explosion problem with excellent performancegraphbased wsd is performed over a graph composed by senses and relations between pairs of senses the relations may be of several types and may have some weight attached to themthe disambiguation is typically performed by applying a ranking algorithm over the graph and then assigning the concepts with highest rank to the corresponding wordsgiven the computational cost of using large graphs like wordnet many researchers use smaller subgraphs built online for each target contextin this paper we present a novel graphbased wsd algorithm which uses the full graph of wordnet efficiently performing significantly better that previously published approaches in english allwords datasetswe also show that the algorithm can be easily ported to other languages with good results with the only requirement of having a wordnetthe algorithm is publicly available2 and can be applied easily to sense inventories and knowledge bases different from wordnetour analysis shows that our algorithm is efficient compared to previously 
proposed alternatives, and that a good choice of WordNet versions and relations is fundamental for good performance. The paper is structured as follows. We first describe the PageRank and Personalized PageRank algorithms. Section 3 introduces the graph-based methods used for WSD. Section 4 shows the experimental setting and the main results, and Section 5 compares our methods with related experiments on graph-based WSD systems. Section 6 shows the results of the method when applied to a Spanish dataset. Section 7 analyzes the performance of the algorithm. Finally, we draw some conclusions in Section 8. The celebrated PageRank algorithm is a method for ranking the vertices in a graph according to their relative structural importance. The main idea of PageRank is that whenever a link from vi to vj exists in a graph, a vote from node i to node j is produced, and hence the rank of node j increases. Besides, the strength of the vote from i to j also depends on the rank of node i: the more important node i is, the more strength its votes will have. Alternatively, PageRank can also be viewed as the result of a random walk process, where the final rank of node i represents the probability of a random walk over the graph ending on node i, at a sufficiently large time. Let G be a graph with N vertices v1, ..., vN, and let di be the outdegree of node i; let M be an N x N transition probability matrix, where Mji = 1/di if a link from i to j exists, and zero otherwise. Then the PageRank vector Pr over G is the solution of the equation

Pr = c M Pr + (1 - c) v

In the equation, v is an N x 1 vector whose elements are 1/N, and c is the so-called damping factor, a scalar value between 0 and 1. The first term of the sum models the voting scheme described in the beginning of the section. The second term represents, loosely speaking, the probability of a surfer randomly jumping to any node, e.g., without following any paths on the graph. The damping factor, usually set in the 0.85-0.95 range, models the way in which these two terms are combined at each step. The second term of the equation can also be seen as a smoothing factor that makes any graph fulfill the property of being aperiodic and irreducible, and thus guarantees that the PageRank calculation converges to a unique stationary distribution. In the traditional PageRank formulation, the vector v is a stochastic normalized vector whose element values are all 1/N, thus assigning equal probabilities to all nodes in the graph in case of random jumps. However, as pointed out in previous work, the vector v can be non-uniform and assign stronger probabilities to certain kinds of nodes, effectively biasing the resulting PageRank vector to prefer these nodes. For example, if we concentrate all the probability mass on a unique node i, all random jumps on the walk will return to i, and thus its rank will be high; moreover, the high rank of i will make all the nodes in its vicinity also receive a high rank. Thus, the importance of node i given by the initial distribution of v spreads along the graph on successive iterations of the algorithm. In this paper, we will use traditional PageRank to refer to the case when a uniform v vector is used in the equation; whenever a modified v is used, we will call it Personalized PageRank. The next section shows how we define a modified v. PageRank is actually calculated by applying an iterative algorithm which computes the equation successively, until convergence below a given threshold is achieved or, more typically, until a fixed number of iterations are executed. Regarding PageRank implementation details, we chose a damping value of 0.85 and finish the calculation after 30 iterations. We did not try other damping factors. Some preliminary experiments with higher iteration counts showed that, although sometimes the node ranks varied, the relative order among
particular word synsets remained stable after the initial iterations note that in order to discard the effect of dangling nodes we slightly modified eqfor the sake of brevity we omit the details which the interested reader can check in in this section we present the application of pagerank to wsdif we were to apply the traditional pagerank over the whole wordnet we would get a contextindependent ranking of word senses which is not what we wantgiven an input piece of text we want to disambiguate all openclass words in the input taken the rest as contextin this framework we need to rank the senses of the target words according to the other words in the contexttheare two main alternatives to achieve this the first method has been explored in the literature and we also presented a variant in but the second method is novel in wsdin both cases the algorithms return a list of ranked senses for each target word in the contextwe will see each of them in turn but first we will present some notation and a preliminary stepa lkb is formed by a set of concepts and relations among them and a dictionary ie a list of words each of them linked to at least one concept of the lkbgiven any such lkb we build an undirected graph g where nodes represent lkb concepts and each relation between concepts vi and vj is represented by an undirected edge eijin our experiments we have tried our algorithms using three different lkbs given an input text we extract the list wi i 1 m of content words which have an entry in the dictionary and thus can be related to lkb conceptslet conceptsi v1 vi be the i am associated concepts of word wi in the lkb graphnote that monosemous words will be related to just one concept whereas polysemous words may be attached to severalas a result of the disambiguation process every concept in conceptsi i 1 m receives a scorethen for each target word to be disambiguated we just choose its associated concept in g with maximal scorein our experiments we build a context of at least 20 content words for each sentence to be disambiguated taking the sentences immediately before and after it in the case that the original sentence was too shortwe follow the algorithm presented in which we explain here for completenessthe main idea of the subgraph method is to extract the subgraph of gkb whose vertices and relations are particularly relevant for a given input contextsuch a subgraph is called a disambiguation subgraph gd and it is built in the following wayfor each word wi in the input context and each concept vi e conceptsi a standard breathfirst search over gkb is performed starting at node vieach run of the bfs calculates the minimum distance paths between vi and the rest of concepts of gkb in particular we are interested in the minimum distance paths between vi and the concepts associated to the rest of the words in the context vj e uj4i conceptsjlet mdpvi be the set of these shortest pathsthis bfs computation is repeated for every concept of every word in the input context storing mdpvi accordinglyat the end we obtain a set of minimum length paths each of them having a different concept as a sourcethe disambiguation graph gd is then just the union of the vertices and edges of the shortest paths gd umi1mdpvvj e conceptsithe disambiguation graph gd is thus a subgraph of the original gkb graph obtained by computing the shortest paths between the concepts of the words cooccurring in the contextthus we hypothesize that it captures the most relevant concepts and relations in the knowledge base for the 
particular input contextonce the gd graph is built we compute the traditional pagerank algorithm over itthe intuition behind this step is that the vertices representing the correct concepts will be more relevant in gd than the rest of the possible concepts of the context words which should have less relations on average and be more isolatedas usual the disambiguation step is performed by assigning to each word wi the associated concept in conceptsi which has maximum rankin case of ties we assign all the concepts with maximum ranknote that the standard evaluation script provided in the senseval competitions treats multiple senses as if one was chosen at random ie for evaluation purposes our method is equivalent to breaking ties at randomas mentioned before personalized pagerank allows us to use the full lkbwe first insert the context words into the graph g as nodes and link them with directed edges to their respective conceptsthen we compute the personalized pagerank of the graph g by concentrating the initial probability mass uniformly over the newly introduced word nodesas the words are linked to the concepts by directed edges they act as source nodes injecting mass into the concepts they are associated with which thus become relevant nodes and spread their mass over the lkb graphtherefore the resulting personalized pagerank vector can be seen as a measure of the structural relevance of lkb concepts in the presence of the input contextone problem with personalized pagerank is that if one of the target words has two senses which are related by semantic relations those senses reinforce each other and could thus dampen the effect of the other senses in the contextwith this observation in mind we devised a variant where we build the graph for each target word in the context for each target word wi we concentrate the initial probability mass in the senses of the words surrounding wi but not in the senses of the target word itself so that context words increase its relative importance in the graphthe main idea of this approach is to avoid biasing the initial score of concepts associated to target word wi and let the surrounding words decide which concept associated to wi has more relevancecontrary to the other two approaches ppr w2w does not disambiguate all target words of the context in a single run which makes it less efficient in this paper we will use two datasets for comparing graphbased wsd methods namely the senseval2 and senseval3 all words datasets which are both labeled with wordnet 17 tagswe did not use the semeval dataset for the sake of comparing our results to related work none of which used semeval datatable 1 shows the results as recall of the graphbased wsd system over these datasets on the different lkbswe detail overall results as well as results per pos and the confidence interval for the overall resultsthe interval was computed using bootstrap resampling with 95 confidencethe table shows that ppr w2w is consistently the best method in both datasets and for all lkbsppr and spr obtain comparable results which is remarkable given the simplicity of the ppr algobaseline and the best results of supervised systems at competition time rithm compared to the more elaborate algorithm to construct the graphthe differences between methods are not statistically significant which is a common problem on this relatively small datasets regarding lkbs the best results are obtained using wordnet 17 and extended wordnethere the differences are in many cases significantthese results are 
surprising as we would expect that the manually disambiguated gloss relations from wordnet 30 would lead to better results compared to the automatically disambiguated gloss relations from the extended wordnet the lower performance of wnet30gloss can be due to the fact that the senseval all words data set is tagged using wordnet 17 synsetswhen using a different lkb for wsd a mapping to wordnet 17 is requiredalthough the mapping is cited as having a correctness on the high 90s it could have introduced sufficient noise to counteract the benefits of the handdisambiguated glossestable 1 also shows the most frequent sense as well as the best supervised systems that participated in each competition the mfs is a baseline for supervised systems but it is considered a difficult competitor for unsupervised systems which rarely come close to itin this case the mfs baseline was computed using previously availabel training data like semcorour best results are close to the mfs in both senseval2 and senseval3 datasetsthe results for the supervised system are given for reference and we can see that the gap is relatively small specially for senseval3in this section we will briefly describe some graphbased methods for knowledgebased wsdthe methods here presented cope with the problem of sequencelabeling ie they disambiguate all the words coocurring in a sequence all the methods rely on the information represented on some lkb which typically is some version of wordnet sometimes enriched with proprietary relationsthe results on our datasets when available are shown in table 2the table also shows the performance of supervised systemsthe texrank algorithm for wsd creates a complete weighted graph formed by the synsets of the words in the input contextthe weight of the links joining two synsets is calculated by executing lesks algorithm between them ie by calculating the overlap between the words in the glosses of the correspongind sensesonce the complete graph is built the pagerank algorithm is executed over it and words are assigned to the most relevant synsetin this sense pagerank is used an alternative to simulated annealing to find the optimal pairwise combinationsthe method was evaluated on the senseval3 dataset as shown in row mih05 on table 2 extends their previous work by using a collection of semantic similarity measures when assigning a weight to the links across synsetsthey also compare different graphbased centrality algorithms to rank the vertices of the complete graphthey use different similarity metrics for different pos types and a voting scheme among the centrality algorithm rankshere the senseval3 corpus was used as a development data set and we can thus see those results as the upperbound of their methodwe can see in table 2 that the methods presented in this paper clearly outperform both mih05 and sin07this result suggests that analyzing the lkb structure as a whole is preferable than computing pairwise similarity measures over synsetsthe results of various inhouse made experiments replicating also confirm this observationnote also that our methods are simpler than the combination strategy used in and that we did not perform any parameter tuning as they didin the authors develop a knowledgebased wsd method based on lexical chains called structural semantic interconnections although the system was first designed to find the meaning of the words in wordnet glosses the authors also apply the method for labeling text sequencesgiven a text sequence ssi first identifies monosemous words and assigns 
the corresponding synset to themthen it iteratively disambiguates the rest of terms by selecting the senses that get the strongest interconnection with the synsets selected so farthe interconnection is calculated by searching for paths on the lkb constrained by some handmade rules of possible semantic patternsthe method was evaluated on the senseval3 dataset as shown in row nav05 on table 2note that the method labels an instance with the most frequent sense of the word if the algorithm produces no output for that instance which makes comparison to our system unfair specially given the fact that the mfs performs better than ssiin fact it is not possible to separate the effect of ssi from that of the mfsfor this reason we place this method close to the mfs baseline in table 2in the authors perform a twostage process for wsdgiven an input context the method first explores the whole lkb in order to find a subgraph which is particularly relevant for the words of the contextthen they study different graphbased centrality algorithms for deciding the relevance of the nodes on the subgraphas a result every word of the context is attached to the highest ranking concept among its possible sensesthe spr method is very similar to the main difference lying on the initial method for extracting the context subgraphwhereas apply a depthfirst search algorithm over the lkb graph and restrict the depth of the subtree to a value of 3 spr relies on shortest paths between word synsetsnavigli and lapata do not report overall results and therefore we cannot directly compare our results with theirshowever we can see that on a posbasis evaluation our results are consistently better for nouns and verbs and rather similar for adjectives is another example of a twostage process the first one consisting on finding a relevant subgraph by performing a bfs dataset including mfs and the best supervised system in the competition search over the lkbthe authors apply a spreading activation algorithm over the subgraph for node rankingedges of the subgraph are weighted according to its type following a tfidf like approachthe results show that our methods clearly outperform tsatsa07the fact that the spr method works better suggests that the traditional pagerank algorithm is a superior method for ranking the subgraph nodesas stated before all methods presented here use some lkb for performing wsd and use wordnet relations as a knowledge source but neither of them specify which particular version did they use uses wordnet 17 enriched with extended wordnet relations just as we doboth use wordnet 20 as the underlying lkb albeit enriched with several new relations which are manually createdunfortunately those manual relations are not publicly available so we cannot directly compare their results with the rest of the methodsin we experiment with different lkbs formed by combining relations of different mcr versions along with relations extracted from semcor which we call supervised and unsupervised relations respectivelythe unsupervised relations that yielded bests results are also used in this paper our wsd algorithm can be applied over nonenglish texts provided that a lkb for this particular language existswe have tested the graphalgorithms proposed in this paper on a spanish dataset using the spanish wordnet as knowledge source we used the semeval2007 task 09 dataset as evaluation gold standard the dataset contains examples of the 150 most frequent nouns in the cessece corpus manually annotated with spanish wordnet synsetsit is 
split into a train and test part and has an all words shape ie input consists on sentences each one having at least one occurrence of a target nounwe ran the experiment over the test part and used the train part for calculating the mfs baselinewe used the spanish wordnet as lkb enriched with extended wordnet relationsit contains 105 501 nodes and 623316 relationsthe results in table 3 are consistent with those for english with our algorithm approaching mfs performancenote that for this dataset the supervised algorithm could barely improve over the mfs suggesting that for this particular dataset mfs is particularly strongtable 4 shows the time spent by the different algorithms when applied to the senseval2 all words dataset using the wnet17 xwn as lkbthe dataset consists on 2473 word instances appearing on 476 different sentencesthe experiments were done on a computer with four 266 ghz processors and 16 gb memorythe table shows that the time elapsed by the algorithms varies between 30 minutes for the ppr method to almost 3 hours spent by the ppr w2w method the spr method lies in between requiring 2 hours for completing the task but its overall performance is well below the pagerank based ppr w2w methodnote that the algorithm is coded in c for greater efficiency and uses the boost graph libraryregarding pagerank calculation we have tried different numbers of iterations and analyze the rate of convergence of the algorithmfigure 1 depicts the performance of the ppr w2w method for different iterations of the algorithmas before the algorithm is applied over the mcr17 xwn lkb and evaluated on the senseval2 all words datasetthe algorithm converges very quickly one sole iteration suffices for achieving a relatively high performance and 20 iterations are enough for achieving convergencethe figure shows that depending on the lkb complexity the user can tune the algorithm and lower the number of iterations thus considerably reducing the time required for disambiguationin this paper we propose a new graphbased method that uses the knowledge in a lkb in order to perform unsupervised word sense disambuationour algorithm uses the full graph of the lkb efficiently performing better than previous approaches in english allwords datasetswe also show that the algorithm can be easily ported to other languages with good results with the only requirement of having a wordnetboth for spanish and english the algorithm attains performances close to the mfsthe algorithm is publicly available5 and can be applied easily to sense inventories and knowledge bases different from wordnetour analysis shows that our algorithm is efficient compared to previously proposed alternatives and that a good choice of wordnet versions and relations is fundamental for good performancethis work has been partially funded by the eu commission and spanish research department
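The timing and convergence discussion above centres on personalized PageRank over the LKB graph with a small, tunable number of power iterations. Below is a minimal sketch of that computation, not the authors' C/Boost implementation; the toy adjacency matrix, the damping factor of 0.85 and the choice of synsets {0, 3} for the context words are illustrative assumptions.

```python
import numpy as np

def personalized_pagerank(adj, teleport, damping=0.85, iterations=20):
    """Power-iteration personalized PageRank over an LKB graph.

    adj        -- adjacency matrix of the LKB (n x n, assumed symmetric here)
    teleport   -- reset distribution; mass is concentrated on the synsets of
                  the context words, which is what personalizes the ranking
    iterations -- the text reports that ~20 iterations suffice for convergence
    """
    n = adj.shape[0]
    out_degree = adj.sum(axis=0)
    out_degree[out_degree == 0] = 1.0            # avoid division by zero
    transition = adj / out_degree                # column-stochastic transition matrix
    rank = np.full(n, 1.0 / n)                   # uniform initialisation
    for _ in range(iterations):
        rank = (1 - damping) * teleport + damping * transition.dot(rank)
    return rank

# toy LKB with 5 synsets; the context words of the target map to synsets 0 and 3
adj = np.array([[0, 1, 0, 0, 1],
                [1, 0, 1, 0, 0],
                [0, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [1, 0, 0, 1, 0]], dtype=float)
teleport = np.zeros(5)
teleport[[0, 3]] = 0.5
scores = personalized_pagerank(adj, teleport)
# each target word is then labelled with its highest-scoring candidate synset
```

Lowering `iterations` trades a little accuracy for speed, which is the tuning knob the text describes.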
E09-1005
personalizing pagerank for word sense disambiguation. in this paper we propose a new graph-based method that uses the knowledge in an lkb in order to perform unsupervised word sense disambiguation. our algorithm uses the full graph of the lkb efficiently, performing better than previous approaches on english all-words datasets. we also show that the algorithm can be easily ported to other languages with good results, the only requirement being a wordnet. in addition we analyse the performance of the algorithm, showing that it is efficient and that it could be tuned to be faster. we propose personalized pagerank, which tries to trade off between the amount of lexical information employed and the overall efficiency. we initialize the ranks of the vertices at a uniform value. we present a novel use of pagerank for word sense disambiguation. the key idea is to adapt the matrix initialization step in order to exploit the available contextual evidence
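The related-work discussion earlier in this record describes the TextRank-style alternative: build a complete graph over the candidate senses of the context words, weight each edge by the Lesk gloss overlap of the two senses, and rank the vertices with PageRank. A minimal sketch of that construction follows; it is not the cited system's code, networkx is used only for convenience, and the toy lesk_overlap ignores stop-word filtering and other refinements.

```python
import itertools
import networkx as nx

def lesk_overlap(gloss_a, gloss_b):
    """Edge weight = number of tokens shared by the two glosses (toy version)."""
    return len(set(gloss_a.split()) & set(gloss_b.split()))

def textrank_wsd(candidates):
    """candidates: {word: [(sense_id, gloss), ...]} for the words in one context.

    Builds the complete sense graph weighted by gloss overlap, runs PageRank,
    and keeps the best-ranked sense for each word."""
    graph = nx.Graph()
    senses = [(w, sid, gloss) for w, ss in candidates.items() for sid, gloss in ss]
    graph.add_nodes_from(sid for _, sid, _ in senses)
    for (wa, sa, ga), (wb, sb, gb) in itertools.combinations(senses, 2):
        if wa == wb:
            continue                      # no edges between senses of the same word
        weight = lesk_overlap(ga, gb)
        if weight > 0:
            graph.add_edge(sa, sb, weight=weight)
    rank = nx.pagerank(graph, weight="weight")
    return {w: max(ss, key=lambda s: rank.get(s[0], 0.0))[0]
            for w, ss in candidates.items()}
```

Note that this pairwise-similarity construction is exactly what the experiments above suggest is weaker than ranking over the full LKB graph.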
bayesian word sense induction sense induction seeks to automatically identify word senses directly from a corpus a key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a words contexts into different classes each representing a word sense our work places sense induction in a bayesian context by modeling the contexts of the ambiguous word as samples from a multinomial distribution over senses which are in turn characterized as distributions over words the bayesian framework provides a principled way to incorporate a wide range of features beyond lexical cooccurrences and to systematically assess their utility on the sense induction task the proposed approach yields improvements over stateoftheart systems on a benchmark dataset sense induction is the task of discovering automatically all possible senses of an ambiguous wordit is related to but distinct from word sense disambiguation where the senses are assumed to be known and the aim is to identify the intended meaning of the ambiguous word in contextalthough the bulk of previous work has been devoted to the disambiguation problem1 there are good reasons to believe that sense induction may be able to overcome some of the issues associated with wsdsince most disambiguation methods assign senses according to and with the aid of dictionaries or other lexical resources it is difficult to adapt them to new domains or to languages where such resources are scarcea related problem concerns the granularity of the sense distinctions which is fixed and may not be entirely suitable for different applicationsin contrast when sense distinctions are inferred directly from the data they are more likely to represent the task and domain at handthere is little risk that an important sense will be left out or that irrelevant senses will influence the resultsfurthermore recent work in machine translation and information retrieval indicates that induced senses can lead to improved performance in areas where methods based on a fixed sense inventory have previously failed sense induction is typically treated as an unsupervised clustering problemthe input to the clustering algorithm are instances of the ambiguous word with their accompanying contexts and the output is a grouping of these instances into classes corresponding to the induced sensesin other words contexts that are grouped together in the same class represent a specific word sensein this paper we adopt a novel bayesian approach and formalize the induction problem in a generative modelfor each ambiguous word we first draw a distribution over senses and then generate context words according to this distributionit is thus assumed that different senses will correspond to distinct lexical distributionsin this framework sense distinctions arise naturally through the generative process our model postulates that the observed data are explicitly intended to communicate a latent structure our work is related to latent dirichlet allocation a probabilistic model of text generationlda models each document using a mixture over k topics which are in turn characterized as distributions over wordsthe words in the document are generated by repeatedly sampling a topic according to the topic distribution and selecting a word given the chosen topicwhereas lda generates words from global topics corresponding to the whole document our model generates 
words from local topics chosen based on a context window around the ambiguous worddocumentlevel topics resemble general domain labels and cannot faithfully model more finegrained meaning distinctionsin our work therefore we create an individual model for every word rather than a global model for an entire document collectionwe also show how multiple information sources can be straightforwardly integrated without changing the underlying probabilistic modelfor instance besides lexical information we may want to consider parts of speech or dependencies in our sense induction problemthis is in marked contrast with previous ldabased models which mostly take only wordbased information into accountwe evaluate our model on a recently released benchmark dataset and demonstrate improvements over the stateoftheartthe remainder of this paper is structured as followswe first present an overview of related work and then describe our bayesian model in more detail section 5 describes the resources and evaluation methodology used in our experimentswe discuss our results in section 6 and conclude in section 7sense induction is typically treated as a clustering problem where instances of a target word are partitioned into classes by considering their cooccurring contextsconsiderable latitude is allowed in selecting and representing the cooccurring contextsprevious methods have used first or second order cooccurrences parts of speech and grammatical relations the size of the context window also varies it can be a relatively small such as two words before and after the target word the sentence within which the target is found or even larger such as the 20 surrounding words on either side of the target in essence each instance of a target word is represented as a feature vector which subsequently serves as input to the chosen clustering methoda variety of clustering algorithms have been employed ranging from kmeans to agglomerative clustering and the information bottleneck graphbased methods have also been applied to the sense induction taskin this framework words are represented as nodes in the graph and vertices are drawn between the target and its cooccurrencessenses are induced by identifying highly dense subgraphs in the cooccurrence graph although lda was originally developed as a generative topic model it has recently gained popularity in the wsd literaturethe inferred documentlevel topics can help determine coarsegrained sense distinctionscai et al propose to use ldas wordtopic distributions as features for training a supervised wsd systemin a similar vein boydgraber and blei infer lda topics from a large corpus however for unsupervised wsdhere lda topics are integrated with mccarthy et als algorithmfor each target word a topic is sampled from the documents topic distribution and a word is generated from that topicalso a distributional neighbor is selected based on the topic and distributional similarity to the generated wordthen the word sense is selected based on the word neighbor and topicboydgraber et al extend the topic modeling framework to include wordnet senses as a latent variable in the word generation processin this case the model discovers both the topics of the corpus and the senses assigned to each of its wordsour own model is also inspired by lda but crucially performs word sense induction not disambiguationunlike the work mentioned above we do not rely on a preexisting list of senses and do not assume a correspondence between our automatically derived senseclusters and those of any given 
inventory2 a key element in these previous attempts at adapting lda for wsd is the tendency to remain at a high level documentlike settingin contrast we make use of much smaller units of text and create an individual model for each word typeour induced senses are few in number this is in marked contrast to tens and sometimes hundreds of topics commonly used in documentmodeling tasksunlike many conventional clustering methods our model is probabilistic it specifies a probability distribution over possible values which makes it easy to integrate and combine with other systems via mixture or product modelsfurthermore the bayesian framework allows the incorporation of several information sources in a principled mannerour model can easily handle an arbitrary number of feature classes this functionality in turn enables us to evaluate which linguistic information matters for the sense induction taskprevious attempts to handle multiple information sources in the lda framework have been taskspecific and limited to only two layers of informationour model provides this utility in a general framework and could be applied to other tasks besides sense inductionthe core idea behind sense induction is that contextual information provides important cues regarding a words meaningthe idea dates back to firth and underlies most wsd and lexicon acquisition work to dateunder this premise we should expect different senses to be signaled by different lexical distributionswe can place sense induction in a probabilistic setting by modeling the context words around the ambiguous target as samples from a multinomial sense distributionmore formally we will write p for the distribution over senses s of an ambiguous target in a specific context window and p for the probability distribution over context words w given sense s each word wi in the context window is generated by first sampling a sense from the sense distribution then choosing a word from the sensecontext distributionp denotes the probability that the jth sense was sampled for the ith word token and p the probability of context word wi under sense jthe model thus specifies a distribution over words within a context window where s is the number of senseswe assume that each target word has c contexts and each context c cate conditional dependencies between variables whereas plates refer to repetitions of sampling stepsthe variables in the lower right corner refer to the number of samples consists of nc word tokenswe shall write as a shorthand for p the multinomial distribution over words for sense j and 0 as a shorthand for the distribution of senses in context c following blei et al we will assume that the mixing proportion over senses 0 is drawn from a dirichlet prior with parameters athe role of the hyperparameter a is to create a smoothed sense distributionwe also place a symmetric dirichlet r on the hyperparmeter r can be interpreted as the prior observation count on the number of times context words are sampled from a sense before any word from the corpus is observedour model is represented in graphical notation in figure 1the model sketched above only takes word information into accountmethods developed for supervised wsd often use a variety of information sources based not only on words but also on lemmas parts of speech collocations and syntactic relationships the first idea that comes to mind is to use the same model while treating various features as wordlike elementsin other words we could simply assume that the contexts we wish to model are the 
union of all our featuresalthough straightforward this solution is undesirableit merges the distributions of distinct feature categories into a single one and is therefore conceptually incorrect and can affect the performance of the modelfor instance partsofspeech would share a distribution with words layers containing more elements would overwhelm rectangles represent different sources of informationall layers share the same instancespecific sense distribution but each have their own sensefeature distribution shaded nodes represent observed features f these can be words parts of speech collocations or dependencies unconditional joint distribution p of the unobserved variables in our model each element in each layer is a variable and is assigned a sense label from these assignments we must determine the sense distribution of the instance as a wholethis is the purpose of the gibbs sampling procedurespecifically in order to derive the update function used in the gibbs sampler we must provide the conditional probability of the ith variable being assigned sense si in layer l given the feature value fi of the context variable and the current sense assignments of all the other variables in the data p p p the probability of a single sense assignment si is proportional to the product of the likelihood and the prior probability of the assignment smaller ones our solution is to treat each information source individually and then combine all of them together in a unified modelour underlying assumption is that the context window around the target word can have multiple representations all of which share the same sense distributionwe illustrate this in figure 2 where each inner rectangle corresponds to a distinct feature typewe will naively assume independence between multiple layers even though this is clearly not the case in our taskthe idea here is to model each layer as faithfully as possible to the empirical data while at the same time combining information from all layers in estimating the sense distribution of each target instanceour inference procedure is based on gibbs sampling the procedure begins by randomly initializing all unobserved random variablesat each iteration each random variable si is sampled from the conditional distribution p where si refers to all variables other than sieventually the distribution over samples drawn from this process will converge to the f p pdo _ rl for the likelihood term p integrating over all possible values of the multinomial featuresense distribution gives us the rightmost term in equation 3 which has an intuitive interpretationthe term indicates the number of times the featurevalue fi was assigned sense si in the rest of the datasimilarly indicates the number of times the sense assignment si was observed in the datarl is the dirichlet prior for the featuresense distribution in the current layer l and vl is the size of the vocabulary of that layer ie the number of possible feature values in the layerintuitively the probability of a featurevalue given a sense is directly proportional to the number of times we have seen that value and that senseassignment together in the data taking into account a pseudocount prior expressed through r this can also be viewed as a form of smoothinga similar approach is taken with regards to the prior probability pin this case however all layers must be considered here λl is the weight for the contribution of layer l and αl is the portion of the dirichlet prior for the sense distribution θ in the current layertreating each 
layer individually we integrate over the possible values of θ obtaining a similar countbased term where l indicates the number of elements in layer l assigned the sense si l indicates the number of elements in layer l ie the size of the layer and s the number of sensesto distribute the pseudo counts represented by α in a reasonable fashion among the layers we define αl l m α where m l l ie the total size of the instancethis distributes α according to the relative size of each layer in the instanceplacing these values in equation 4 we obtain the following msα putting it all together we arrive at the final update equation for the gibbs sampling note that when dealing with a single layer equation 8 collapses to where m indicates the number of elements in the context window assigned to sense sithis is identical to the update equation in the original wordbased lda modelthe sampling algorithm gives direct estimates of s for every context elementhowever in view of our task we are more interested in estimating θ the sensecontext distribution which can be obtained as in equation 7 but taking into account all sense assignments without removing assignment iour system labels each instance with the single most probable sensein this section we discuss our experimental setup for assessing the performance of the model presented abovewe give details on our training procedure describe our features and explain how our system output was evaluateddata in this work we focus solely on inducing senses for nouns since they constitute the largest portion of content wordsfor example nouns represent 45 of the content words in the british national corpusmoreover for many tasks and applications nouns are the most frequent and most important partofspeechfor evaluation we used the semeval2007 benchmark dataset released as part of the sense induction and discrimination task the dataset contains texts from the penn treebank ii corpus a collection of articles from the first half of the 1989 wall street journal it is handannotated with ontonotes senses and has 35 nounsthe average noun ambiguity is 39 with a high skew towards the predominant sensethis is not entirely surprising since ontonotes senses are less finegrained than wordnet senseswe used two corpora for training as we wanted to evaluate our models performance across different domainsthe british national corpus is a 100 million word collection of samples of written and spoken language from a wide range of sources including newspapers magazines books letters and school essays as well as spontaneous conversationsthis served as our outofdomain corpus and contained approximately 730 thousand instances of the 35 target nouns in the semeval lexical samplethe second indomain corpus was built from selected portions of the wall street journalwe used all articles from the years 198789 and 1994 to create a corpus of similar size to the bnc containing approximately 740 thousand instances of the target wordsadditionally we used the senseval 2 and 3 lexical sample data as development sets for experimenting with the hyperparameters of our model evaluation methodology agirre and soroa present two evaluation schemes for assessing sense induction methodsunder the first scheme the system output is compared to the gold standard using standard clustering evaluation metrics here no attempt is made to match the induced senses against the labels of the gold standardunder the second scheme the gold standard is partitioned into a test and training corpusthe latter is used to derive a mapping 
of the induced senses to the gold standard labelsthe mapping is then used to calculate the systems fscore on the test corpusunfortunately the first scheme failed to discriminate among participating systemsthe oneclusterperword baseline outperformed all systems except one which was only marginally betterthe scheme ignores the actual labeling and due to the dominance of the first sense in the data encourages a singlesense approach which is further amplified by the use of a coarsegrained sense inventoryfor the purposes of this work therefore we focused on the second evaluation schemehere most of the participating systems outperformed the mostfrequentsense baseline and the rest obtained only slightly lower scoresfeature space our experiments used a feature set designed to capture both immediate local context wider context and syntactic contextspecifically we experimented with six feature categories 10word window 5word window collocations word ngrams partofspeech ngrams and dependency relations these features have been widely adopted in various wsd algorithms in all cases we use the lemmatized version of the wordthe semeval workshop organizers provided a small amount of context for each instance this context as well as the text in the training corpora was parsed using rasp to extract partofspeech tags lemmas and dependency informationfor instances containing more than one occurrence of the target word we disambiguate the first occurrenceinstances which were not correctly recognized by the parser were automatically assigned to the largest sensecluster3model selection the framework presented in section 3 affords great flexibility in modeling the empirical datathis however entails that several parameters must be instantiatedmore precisely our model is conditioned on the dirichlet hyperparameters α and β and the number of senses s additional parameters include the number of iterations for the gibbs sampler and whether or not the layers are assigned different weightsour strategy in this paper is to fix α and β and explore the consequences of varying s the value for the α hyperparameter was set to 002this was optimized in an independent tuning experiment which used the senseval 2 and senseval 3 datasetswe experimented with α values ranging from 0005 to 1the β parameter was set to 01 this value is often considered optimal in ldarelated models for simplicity we used uniform weights for the layersthe gibbs sampler was run for 2000 iterationsdue to the randomized nature of the inference procedure all reported results are average scores over ten runsour experiments used the same number of senses for all the words since tuning this number individually for each word would be prohibitivewe experimented with values ranging from three to nine sensesfigure 3 shows the results obtained for different numbers of senses when the model is trained on the wsj and bnc corpora respectivelyhere we are using the optimal combination of layers for each system for the model trained on wsj performance peaks at four senses which is similar to the average ambiguity in the test datafor the model trained on the bnc however the best results are obtained using twice as many sensesusing fewer senses with the bnctrained system can result in a drop in accuracy of almost 2this is due to the shift in domainas the sensedivisions of the learning domain do not match those of the target domain finer granularity is required in order to encompass all the relevant distinctionstable 1 illustrates the senses inferred for the word drug when using 
the indomain and outofdomain corpora respectivelythe most probable words for each sense are also shownfirstly note that the model infers some plausible senses for drug on the wsj corpus sense 1 corresponds to the enforcement sense of drug sense 2 refers to medication sense 3 to the drug industry and sense 4 to drugs researchthe inferred senses for drug on the bnc are more fine grainedfor example the model finds distinct senses for medication and illegal substance it also finds a separate sense for drug dealing and enforcement because the bnc has a broader focus finer distinctions are needed to cover as many senses as possible that are relevant to the target domain layer analysis we next examine which individual feature categories are most informative in our sense induction taskwe also investigate whether their combination through our layered model yields performance improvementswe used 4 senses for the system trained on wsj and 8 for the system trained on the bnc table 2 shows the performance of our model when using only one layerthe layer composed of words cooccurring within a 10word window and representing wider topical information gives the highest scores on its ownit is followed by the 5 and 1 word windows which represent more immediate local contextpartofspeech ngrams and word ngrams on their own achieve lower scores largely due to overgeneralization and data sparseness respectivelythe lowestscoring single layer is the dependency layer with performance only slightly above the mostfrequentsense baseline dependency information is very informative when present but extremely sparsetable 2 also shows the results obtained when running the layered model with all but one of the layers as inputwe can use this information to determine the contribution of each layer by comparing to the combined model with all layers because we are dealing with multiple layers there is an element of overlap involvedtherefore each of the wordwindow layers despite relatively high informativeness on its own does not because as much damage when it is absent since the other layers compensate for the topical and local informationthe absence of the word ngram layer which provides specific local information does not make a great impact when the 1w and pg layers are presentfinally we can see that the extremely sparse dependency layer is detrimental to the multilayer model as a whole and its removal increases performancethe sparsity of the data in this layer means that there is often little information on which to base a decisionin these cases the layer contributes a closetouniform estimation of the sense distribution which confuses the combined modelother layer combinations obtained similar resultstable 2 shows the most informative two and three layer combinationsagain dependencies tend to decrease performanceon the other hand combining features that have similar performance on their own is beneficialwe obtain the best performance overall with a two layered model combining topical and local contextstable 3 replicates the same suite of experiments on the bnc corpusthe general trends are similarsome interesting differences are apparent howeverthe sparser layers notably word ngrams and dependencies fare comparatively worsethis is expected since the more precise local information is likely to vary strongly across domainseven when both domains refer to the same sense of a word it is likely to be used in a different immediate context and local contextual information learned in one domain will be less effective in the 
otheranother observable difference is that the combined model without the dependency layer does slightly better than each of the single layersthe 1wpg combination improves over its components which have similar individual performancefinally the best performing model on the bnc also combines two layers capturing wider and more local contextual information comparison to stateoftheart table 4 compares our model against the two best performing sense induction systems that participated in the semeval2007 competitionir2 performed sense induction using the information bottleneck algorithm whereas umnd2 used kmeans to cluster second order cooccurrence vectors associated with the target wordthese models and our own model significantly outperform the mostfrequentsense baseline our best system is significantly better than umnd2 and quantitatively better than ir2 although the difference is not statistically significantthis paper presents a novel bayesian approach to sense inductionwe formulated sense induction in a generative framework that describes how the contexts surrounding an ambiguous word might be generated on the basis of latent variablesour model incorporates features based on lexical information parts of speech and dependencies in a principled manner and outperforms stateoftheart systemscrucially the approach is not specific to the sense induction task and can be adapted for other applications where it is desirable to take multiple levels of information into accountfor example in document classification one could consider an accompanying image and its caption as possible additional layers to the main textin the future we hope to explore more rigorous parameter estimation techniquesgoldwater and griffiths describe a method for integrating hyperparameter estimation into the gibbs sampling procedure using a prior over possible valuessuch an approach could be adopted in our framework as well and extended to include the layer weighting parameters which have strong potential for improving the models performancein addition we could allow an infinite number of senses and use an infinite dirichlet model to automatically determine how many senses are optimalthis provides an elegant solution to the modelorder problem and eliminates the need for external clustervalidation methodsacknowledgments the authors acknowledge the support of epsrc we are grateful to sharon goldwater for her feedback on earlier versions of this work
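The inference procedure described above reduces, in the single-layer (word-only) case, to the familiar collapsed Gibbs update for an LDA-style model. The sketch below implements that reduced case only, with the hyperparameter values reported in the text (alpha = 0.02, beta = 0.1, 2000 iterations); the layer weighting and multi-layer combination of equation 8 are omitted, and the function name and data layout are assumptions made for illustration.

```python
import numpy as np

def gibbs_sense_induction(contexts, vocab_size, n_senses=4,
                          alpha=0.02, beta=0.1, iterations=2000, seed=0):
    """Collapsed Gibbs sampler for the single-layer (word-only) case.

    contexts: list of lists of word ids, one list per instance of the target word.
    Returns the most probable sense for each instance."""
    rng = np.random.default_rng(seed)
    n_ws = np.zeros((vocab_size, n_senses))      # word-sense counts
    n_s = np.zeros(n_senses)                     # total words assigned to each sense
    n_cs = np.zeros((len(contexts), n_senses))   # per-context sense counts
    z = [rng.integers(n_senses, size=len(c)) for c in contexts]
    for c, (ctx, zc) in enumerate(zip(contexts, z)):
        for w, s in zip(ctx, zc):
            n_ws[w, s] += 1; n_s[s] += 1; n_cs[c, s] += 1
    for _ in range(iterations):
        for c, (ctx, zc) in enumerate(zip(contexts, z)):
            for i, w in enumerate(ctx):
                s = zc[i]                        # remove the current assignment
                n_ws[w, s] -= 1; n_s[s] -= 1; n_cs[c, s] -= 1
                # likelihood (smoothed word-given-sense) times prior (sense counts)
                p = ((n_ws[w] + beta) / (n_s + vocab_size * beta)
                     * (n_cs[c] + alpha))
                s = rng.choice(n_senses, p=p / p.sum())
                zc[i] = s
                n_ws[w, s] += 1; n_s[s] += 1; n_cs[c, s] += 1
    theta = (n_cs + alpha) / (n_cs.sum(axis=1, keepdims=True) + n_senses * alpha)
    return theta.argmax(axis=1)                  # one sense label per instance
```

In the full model the same counts are kept per layer and the prior term pools them with layer weights, but the shape of the update is the same.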
E09-1013
bayesian word sense induction. sense induction seeks to automatically identify word senses directly from a corpus. a key assumption underlying previous work is that the context surrounding an ambiguous word is indicative of its meaning. sense induction is thus typically viewed as an unsupervised clustering problem where the aim is to partition a word's contexts into different classes, each representing a word sense. our work places sense induction in a bayesian context by modeling the contexts of the ambiguous word as samples from a multinomial distribution over senses, which are in turn characterized as distributions over words. the bayesian framework provides a principled way to incorporate a wide range of features beyond lexical co-occurrences and to systematically assess their utility on the sense induction task. the proposed approach yields improvements over state-of-the-art systems on a benchmark dataset. our latent variable formulation serves as a foundation for more robust models of other linguistic phenomena. we extract pseudo-documents from a 10-word window centered on the corresponding word token for each word type. we combine different feature sets using a probabilistic word sense induction model and find that only some combinations produce an improved system
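As a companion to the summary above, here is a minimal sketch of the generative story it describes: a per-context sense mixture is drawn from a Dirichlet prior, each context word receives a sense from that mixture, and the word itself is drawn from the chosen sense's word distribution. The matrix phi and the hyperparameter value are toy inputs, not estimates from the paper.

```python
import numpy as np

def generate_context(phi, alpha, n_words, seed=0):
    """One draw from the generative story: sense mixture -> senses -> words.

    phi   -- S x V matrix; row j is the word distribution for sense j
    alpha -- symmetric Dirichlet hyperparameter for the per-context sense mixture
    """
    rng = np.random.default_rng(seed)
    n_senses, vocab_size = phi.shape
    theta = rng.dirichlet([alpha] * n_senses)             # sense mixture for this context
    senses = rng.choice(n_senses, size=n_words, p=theta)  # one sense per context word
    words = [rng.choice(vocab_size, p=phi[s]) for s in senses]
    return theta, senses, words

# toy example: 2 senses over a 4-word vocabulary
phi = np.array([[0.7, 0.1, 0.1, 0.1],    # sense 0 prefers word 0
                [0.1, 0.1, 0.1, 0.7]])   # sense 1 prefers word 3
theta, senses, words = generate_context(phi, alpha=0.02, n_words=10)
```

Inference inverts this process: given only the words, it recovers plausible sense assignments and the mixture theta.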
nonconcatenative finitestate morphology the last few years so called in general and morphology in particular have become widely accepted as paradigms for the computational treatment of morphology finitestate morphology appeals to the notion of a finitestate transducer which is simply a classical finitestate automaton whose transitions are labeled with pairs rather than with single symbols the automaton operates on a pair of tapes and advances over a given transition if the current symbols on the tapes match the pair on the transition one member of the pair of symbols on a transition can be the designated null symbol which we will write c when this appears the corresponding tape is not examined and it does not advance as the machine moves to the next state finitestate morphology originally arose out of a desire to provide ways of analyzing surface forms using grammars expressed in terms of systems of ordered rewriting rules kaplan and kay observed that finitestate transducers could be used to mimic a large class of rewriting rules possibly including all those for phonology the importance of came from two considerations first transducers are indifferent as to the direction in which they are applied in other words they can be used with equal facility to translate between tapes in either direction to accept or reject pairs of tapes or to generate pairs of tapes second a pair of transducers with one tape in common is equivalent to a single transducer operating on the remaining pair of tapes a simple algorithm exists for constructing the transition diagram for composite machine given those of the original pair by repeated application of this algorithm it is therefore possible to reduce a cascade of transducers each linked to the next by a common tape to a single transducer which accepts exactly the same pair of tapes as was accepted by the original cascade as a whole from these two facts together it follows that an arbitrary ordered set of rewriting rules can be modeled by a finitestate transducer which can be automatically constructed from them and which serves as well for analyzing surface forms as for generating them from underlying lexical strings a transducer obtained from an ordered set of in the way just outlined is a level the sense that mediates directly between lexical and surface forms without ever the intermediate forms would arise in the course of applying the original rules by one the term morphology is used a more restricted way to apply to a system in which no intermediate forms are posited even in the original grammatical formalism the writer of a grammar using a twolevel formalism never needs to think in terms of any representations other than the lexical and the surface ones what he does is to specify using one formalism or another a set of transducers each of which mediates directly between these forms and each of which restricts the allowable pairs of strings in some way the pairs that the system as a whole accepts are those are those that are rejected by none of the component transducers modulo certain assumptions about way in they interact whose details need not concern us once again there is a formal procedure that can be used to combine set transducers that make up such a system 2 into a single automaton with the same overall so that the final result indistinguishable form that obtained from a set of ordered rules however it is an advantage of parallel machines that they can be used with very little loss of efficiency without combining them in this way while it is not 
the purpose of this paper to explore the formal properties of finitestate transducers a brief excursion may be in order at this point to forestall a possible objection to the claim that a parallel configuration of transducers can be combined into a single one on the face of it this cannot generally be so because there is generally no finitestate transducer that will accept the intersection of the sets of tape pairs accepted by an arbitrary set of transducers it is for example easy to design a transducer that will map a string of x onto the same number of y followed by an arbitrary number of z it is equally easy to design one that maps a string of x onto the same number of z preceded by an arbitrary number of x the intersection of these sets contains those pairs with some number of x on one tape and that same number of y followed by the same number of z on the other tape the set of second tapes therefore contains a contextfree language which it is clearly not within the power of any finitestate device to generate koskenniemi overcame this objection in his original work by adopting the view that all the transducers in the parallel configuration should share the same pair or readwrite heads the effect of this is to insist that they not only accept the same pairs of tapes but that they agree on the particular sequence of symbol pairs that must be rehearsed in the course of accepting each of them kaplan has been able to put a more formal construction on this in the following way let the empty symbols appearing in the pairs labeling any transition in the transducers be replaced by some ordinary symbol not otherwise part of the alphabet the new set of transducers derived in way clearly do not accept the same tapes as the original ones did but there is an algorithm for constructing a single finitestate transducer that will accept the intersection of the pairs they all accept suppose now that this configuration of parallel transducers is put in series with two other standard transducers one which carries the real empty symbol onto its surrogate and everything else onto itself and another transducer that carries the surrogate onto the real empty symbol then the resulting configuration accepts just the desired set of languages all of which are also acceptable by single transducers that can be algorithmically derived form the originals it may well appear that the systems we have been considering properly belong to finitestate phonology or graphology and not to morphology properly construed computational linguists have indeed often been guilty of some carelessness in their use of this terminology but it is not hard to see how it could have arisen the first step in any process that treats natural text is to recognize the words it contains and this generally involves analyzing each of them in terms of a constituent set of formatives of some kind most important among the difficulties that this entails are those having to do with the different shapes that formatives assume in different environments in other words the principal difficulties of morphological analysis are in fact phonological or graphological the inventor of twolevel morphology kimmo koskenniemi is fact provided a finitestate account not just of morphophonemics but also of morphotactics he took it that the allowable set of words simply constituted a regular set of morheme sequences this is probably the more controversial part of his proposal but it is also the less technically elaborate and therefore the one that has attracted less attention 
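Since the discussion so far stays fairly abstract, the following is a minimal sketch, in Python, of the kind of device being described: a transducer whose transitions are labelled with lexical:surface symbol pairs (with '0' standing in for the designated null symbol) and which accepts or rejects a pair of tapes. The toy rule, state names and symbols are invented for illustration and are not taken from Koskenniemi's or Kaplan and Kay's actual systems.

```python
def accepts(transitions, start, finals, lexical, surface):
    """Does the two-tape transducer accept the pair (lexical, surface)?

    transitions maps (state, lexical_symbol, surface_symbol) -> next_state;
    either symbol may be '0' (the null symbol), in which case that tape is
    neither examined nor advanced on the move."""
    agenda = [(start, 0, 0)]
    seen = set()
    while agenda:
        state, i, j = agenda.pop()
        if (state, i, j) in seen:
            continue
        seen.add((state, i, j))
        if i == len(lexical) and j == len(surface) and state in finals:
            return True
        for (st, a, b), nxt in transitions.items():
            if st != state:
                continue
            ok_lex = a == '0' or (i < len(lexical) and lexical[i] == a)
            ok_surf = b == '0' or (j < len(surface) and surface[j] == b)
            if ok_lex and ok_surf:
                agenda.append((nxt, i + (a != '0'), j + (b != '0')))
    return False

# toy two-level rule: lexical N surfaces as m before a surface p, and as n otherwise
transitions = {
    ('q', 'a', 'a'): 'q', ('q', 'p', 'p'): 'q',
    ('q', 'N', 'm'): 'P',      # N:m commits us to seeing p:p next
    ('P', 'p', 'p'): 'q',
    ('q', 'N', 'n'): 'R',      # N:n forbids p:p as the next pair
    ('R', 'a', 'a'): 'q',
}
finals = {'q', 'R'}
print(accepts(transitions, 'q', finals, 'aNpa', 'ampa'))   # True
print(accepts(transitions, 'q', finals, 'aNpa', 'anpa'))   # False
```

Because acceptance is checked over pairs of tapes, the same table serves equally for generation and analysis, which is the directional indifference the text emphasises; the four-tape machines introduced below extend this scheme rather than replace it.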
as a result the term quottwolevel morphologyquot has come to be commonly accepted as applying to any system of word recognition that involves twolevel finitestate phonology or graphology the approach to nonconcatenative morphology to be outlined in this paper will provide a more unified treatment of morphophonemics and morphotactics than has been usual 3 i shall attempt to show how a twolevel account might be given of nonconcatenative morphological phenomena particularly those exhibited in the semitic languages the approach i intend to take is inspired not only by finitestate morphology broadly construed but equally by autosegmental phonology as proposed by goldsmith and the autosegmental morphology of mccarthy all the data that i have used in this work is taken from mccarthy and my debt to him will be clear throughout forms that can be constructed on the basis of each of the stems shown however there is every reason to suppose that though longer and greatly more complex in detail that enterprise would not require essentially different mechanisms from the ones i shall describe the overall principles on which the material table is are clear from a fairly cursory inspection each form contains the letters quotktbquot somewhere in it this is the root of the verb meaning quotwritequot by replacing these three letters with other appropriately chosen perfective imperfective participle active passive active i katab kutib aktub ii kattab kuttib ukattib iii kaatab kuutib ukaatib iv aktab uktib youaktib v takattab tukuttib atakattab vi takaatab tukuutib atakaatab vii nkatab nkutib ankatib viii ktatab ktutib aktatib ix ktabab aktabib x staktab stuktib astaktib xi ktaabab aktaabib xii ktawtab aktawtib xiii ktawwab aktawwib xiv ktanbab aktanbib xv ktanbay aktanbiy passive maktuub mukattab mukaatab muaktab mutakattab mutakaatab munkatab muktatab muktabib mustaktab muktaabib muktawtib muktawwib muktanbib muktanbiy table i i take it as my task to describe how the members of a paradigm like the one in table i might be generated and recognized effectively and efficiently and in such a way as to capture profit from the linguistic generalizations inherent in it now this is a slightly artificial problem because the forms in table i are not in words but only verb stems to get the verb forms that would be found in arabic text we should have to expand the table very considerably to show the inflected sequences of three consonants we would obtain corresponding paradigms for other roots with some notable exceptions the columns of the table contain stems with the same sequence of vowels of these is known as a as the headings of the columns show these can serve to distinguish perfect from imperfective active from passive and the like each row of the table is characterized by a particular pattern according to which the vowels and consonants alternate in other words it is characteristic of a given row 4 that the vowel in a particular position is long or short or that a consonant is simple or geminate or that material in one syllable is repeated in the following one mccarthy refers to each of these as a template term which i shall take over each of them adds a particular semantic component to the basic verb making it reflexive causative or whatever our problem will therefore involve designing an abstract device capable of combining components of these three kinds into a single sequence our solution will take the form of a set of one or more finitestate transducers that will work in parallel like those of koskenniemmi but 
on four tapes rather than just two there will not be space in this paper to give a detailed account even of all the material in table i not to mention problems that would arise if we were to consider the full range of arabic roots what i do hope to do however is to establish a theoretical framework within which solutions to all of these problems could be developed we must presumably expect the transducers we construct to account for the arabic data to have transition functions from states and quadruples of symbols to states in other words we will be able to describe them with transition diagrams whose edges are labeled with a vector of four symbols when the automaton moves from one state to another each of the four tapes will advance over the symbol corresponding to it on the transition that sanctions the move shall allow myself extensions to this basic scheme which will enhance the perspicuity and economy of the formalism without changing its essential character in particular these extensions will leave us clearly within the domain of finitestate devices the extensions have to do with separating the process of reading or writing a symbol on a tape from advancing the tape to the next position the quadruples that label the transitions in the transducers we shall be constructing will be elements each consisting of two parts a symbol and an instruction concerning the movement of the tape i shall use the following notation for this a unadorned symbol will be read in the traditional way namely as requiring the tape on which that symbol appears to move to the next position as soon as it has been read or written if the symbol is shown in brackets on the other hand the tape will not advance and the quadruple specifying the next following transition will therefore clearly have to be one that specifies the same symbol for that tape since the symbol will still be under the readwrite head when that transition is taken with this convention it is natural to dispense with the e symbol in favor of the notation quot1quot that is an unspecified symbol over which the corresponding tape does not advance a symbol can also be written in braces in which case the corresponding tape will move if the symbol under the readwrite head is the last one on the tape this is intended to capture the of autosegmental morphology that is the principal according to which the last item in a string may be reused when required to fill several positions particular set of quadruples or made up of symbols with or without brackets or will constitute the the automata and the quotusefulquot alphabet must be the same for all the automata because none of them can move from one state to another unless the make an exactly parallel not surprisingly a considerable amount of information about the language is contained just in the constitution of the alphabet indeed a single machine with one state which all transitions both leave and enter will generate a nontrivial subset of the material in table i an example of the steps involved in generating a form that depends only minimally on information in a transducer is given in table the eight step are labeled for each one a box is shown enclosing the symbols currently under the readwrite heads the tapes move under the heads from the right and then continue to the left no symbols are shown to the right on the bottom tape because we are assuming that the operation chronicled in these diagrams is one in which a surface form is being 5 v t b 1 k t b ccvvcvc v vccv v c c a a al a 1 a ak t a t b t v a a 
In the last few years, so-called finite-state morphology in general, and two-level morphology in particular, have become widely accepted as paradigms for the computational treatment of morphology. Finite-state morphology appeals to the notion of a finite-state transducer, which is simply a classical finite-state automaton whose transitions are labeled with pairs, rather than with single symbols. The automaton operates on a pair of tapes and advances over a given transition if the current symbols on the tapes match the pair on the transition. One member of the pair of symbols on a transition can be the designated null symbol, which we will write ε; when this appears, the corresponding tape is not examined and it does not advance as the machine moves to the next state. Finite-state morphology originally arose out of a desire to provide ways of analyzing surface forms using grammars expressed in terms of systems of ordered rewriting rules. Kaplan and Kay observed that finite-state transducers could be used to mimic a large class of rewriting rules, possibly including all those required for phonology. The importance of this came from two considerations. First, transducers are indifferent as to the direction in which they are applied; in other words, they can be used with equal facility to translate between tapes in either direction, to accept or reject pairs of tapes, or to generate pairs of tapes. Second, a pair of transducers with one tape in common is equivalent to a single transducer operating on the remaining pair of tapes. A simple algorithm exists for constructing the transition diagram for this composite machine given those of the original pair. By repeated application of this algorithm, it is therefore possible to reduce a cascade of transducers, each linked to the next by a common tape, to a single transducer which accepts exactly the same pair of tapes as was accepted by the original cascade as a whole. From these two facts together it follows
that an arbitrary ordered set of rewriting rules can be modeled by a finitestate transducer which can be automatically constructed from them and which serves as well for analyzing surface forms as for generating them from underlying lexical stringsa transducer obtained from an ordered set of rules in the way just outlined is a two level device in the sense that it mediates directly between lexical and surface forms without ever constructing the intermediate forms that would arise in the course of applying the original rules one by onethe term twolevel morphology however is used in a more restricted way to apply to a system in which no intermediate forms are posited even in the original grammatical formalismthe writer of a grammar using a twolevel formalism never needs to think in terms of any representations other than the lexical and the surface oneswhat he does is to specify using one formalism or another a set of transducers each of which mediates directly between these forms and each of which restricts the allowable pairs of strings in some waythe pairs that the system as a whole accepts are those are those that are rejected by none of the component transducers modulo certain assumptions about the precise way in which they interact whose details need not concern usonce again there is a formal procedure that can be used to combine the set of transducers that make up such a system into a single automaton with the same overall behavior so that the final result is indistinguishable form that obtained from a set of ordered ruleshowever it is an advantage of parallel machines that they can be used with very little loss of efficiency without combining them in this waywhile it is not the purpose of this paper to explore the formal properties of finitestate transducers a brief excursion may be in order at this point to forestall a possible objection to the claim that a parallel configuration of transducers can be combined into a single oneon the face of it this cannot generally be so because there is generally no finitestate transducer that will accept the intersection of the sets of tape pairs accepted by an arbitrary set of transducersit is for example easy to design a transducer that will map a string of x onto the same number of y followed by an arbitrary number of zit is equally easy to design one that maps a string of x onto the same number of z preceded by an arbitrary number of xthe intersection of these two sets contains just those pairs with some number of x on one tape and that same number of y followed by the same number of z on the other tapethe set of second tapes therefore contains a contextfree language which it is clearly not within the power of any finitestate device to generatekoskenniemi overcame this objection in his original work by adopting the view that all the transducers in the parallel configuration should share the same pair or readwrite headsthe effect of this is to insist that they not only accept the same pairs of tapes but that they agree on the particular sequence of symbol pairs that must be rehearsed in the course of accepting each of themkaplan has been able to put a more formal construction on this in the following way let the empty symbols appearing in the pairs labeling any transition in the transducers be replaced by some ordinary symbol not otherwise part of the alphabetthe new set of transducers derived in this way clearly do not accept the same pairs of tapes as the original ones did but there is an algorithm for constructing a single finitestate 
transducer that will accept the intersection of the pairs they all acceptsuppose now that this configuration of parallel transducers is put in series with two other standard transducers one which carries the real empty symbol onto its surrogate and everything else onto itself and another transducer that carries the surrogate onto the real empty symbol then the resulting configuration accepts just the desired set of languages all of which are also acceptable by single transducers that can be algorithmically derived form the originalsit may well appear that the systems we have been considering properly belong to finitestate phonology or graphology and not to morphology properly construedcomputational linguists have indeed often been guilty of some carelessness in their use of this terminologybut it is not hard to see how it could have arisenthe first step in any process that treats natural text is to recognize the words it contains and this generally involves analyzing each of them in terms of a constituent set of formatives of some kindmost important among the difficulties that this entails are those having to do with the different shapes that formatives assume in different environmentsin other words the principal difficulties of morphological analysis are in fact phonological or graphologicalthe inventor of twolevel morphology kimmo koskenniemi is fact provided a finitestate account not just of morphophonemics but also of morphotacticshe took it that the allowable set of words simply constituted a regular set of morheme sequencesthis is probably the more controversial part of his proposal but it is also the less technically elaborate and therefore the one that has attracted less attentionas a result the term quottwolevel morphologyquot has come to be commonly accepted as applying to any system of word recognition that involves twolevel finitestate phonology or graphologythe approach to nonconcatenative morphology to be outlined in this paper will provide a more unified treatment of morphophonemics and morphotactics than has been usual i shall attempt to show how a twolevel account might be given of nonconcatenative morphological phenomena particularly those exhibited in the semitic languagesthe approach i intend to take is inspired not only by finitestate morphology broadly construed but equally by autosegmental phonology as proposed by goldsmith and the autosegmental morphology of mccarthy all the data that i have used in this work is taken from mccarthy and my debt to him will be clear throughout forms that can be constructed on the basis of each of the stems shownhowever there is every reason to suppose that though longer and greatly more complex in detail that enterprise would not require essentially different mechanisms from the ones i shall describethe overall principles on which the material in table i is organized are clear from a fairly cursory inspection each form contains the letters quotktbquot somewhere in itthis is the root of the verb meaning quotwritequotby replacing these three letters with other appropriately chosen i take it as my task to describe how the members of a paradigm like the one in table i might be generated and recognized effectively and efficiently and in such a way as to capture and profit from the principal linguistic generalizations inherent in itnow this is a slightly artificial problem because the forms given in table i are not in fact words but only verb stemsto get the verb forms that would be found in arabic text we should have to expand the table 
very considerably to show the inflected sequences of three consonants we would obtain corresponding paradigms for other rootswith some notable exceptions the columns of the table contain stems with the same sequence of vowelseach of these is known as a vocalism and as the headings of the columns show these can serve to distinguish perfect from imperfective active from passive and the likeeach row of the table is characterized by a particular pattern according to which the vowels and consonants alternatein other words it is characteristic of a given row that the vowel in a particular position is long or short or that a consonant is simple or geminate or that material in one syllable is repeated in the following onemccarthy refers to each of these patterns as a prosodic template a term which i shall take overeach of them adds a particular semantic component to the basic verb making it reflexive causative or whateverour problem will therefore involve designing an abstract device capable of combining components of these three kinds into a single sequenceour solution will take the form of a set of one or more finitestate transducers that will work in parallel like those of koskenniemmi but on four tapes rather than just twothere will not be space in this paper to give a detailed account even of all the material in table i not to mention problems that would arise if we were to consider the full range of arabic rootswhat i do hope to do however is to establish a theoretical framework within which solutions to all of these problems could be developedwe must presumably expect the transducers we construct to account for the arabic data to have transition functions from states and quadruples of symbols to statesin other words we will be able to describe them with transition diagrams whose edges are labeled with a vector of four symbolswhen the automaton moves from one state to another each of the four tapes will advance over the symbol corresponding to it on the transition that sanctions the movei shall allow myself some extensions to this basic scheme which will enhance the perspicuity and economy of the formalism without changing its essential characterin particular these extensions will leave us clearly within the domain of finitestate devicesthe extensions have to do with separating the process of reading or writing a symbol on a tape from advancing the tape to the next positionthe quadruples that label the transitions in the transducers we shall be constructing will be elements each consisting of two parts a symbol and an instruction concerning the movement of the tape i shall use the following notation for thisa unadorned symbol will be read in the traditional way namely as requiring the tape on which that symbol appears to move to the next position as soon as it has been read or writtenif the symbol is shown in brackets on the other hand the tape will not advance and the quadruple specifying the next following transition will therefore clearly have to be one that specifies the same symbol for that tape since the symbol will still be under the readwrite head when that transition is takenwith this convention it is natural to dispense with the e symbol in favor of the notation quot1quot that is an unspecified symbol over which the corresponding tape does not advancea symbol can also be written in braces in which case the corresponding tape will move if the symbol under the readwrite head is the last one on the tapethis is intended to capture the notion of spreading from autosegmental morphology 
that is the principal according to which the last item in a string may be reused when required to fill several positionsa particular set of quadruples or frames made up of symbols with or without brackets or braces will constitute the alphabet of the automata and the quotusefulquot alphabet must be the same for all the automata because none of them can move from one state to another unless the others make an exactly parallel transitionnot surprisingly a considerable amount of information about the language is contained just in the constitution of the alphabetindeed a single machine with one state which all transitions both leave and enter will generate a nontrivial subset of the material in table ian example of the steps involved in generating a form that depends only minimally on information embodied in a transducer is given in table iithe eight step are labeled for each one a box is shown enclosing the symbols currently under the readwrite headsthe tapes move under the heads from the right and then continue to the left no symbols are shown to the right on the bottom tape because we are assuming that the operation chronicled in these diagrams is one in which a surface form is being generatedthe bottom tapethe one containing the surface formis therefore being written and it is for this reason that nothing appears to the rightthe other three tapes in the order shown contain the root the prosodic template and the vocalismto the right of the tapes the frame is shown which sanctions the move that will be made to advance from that position to the nextno such frame is given for the last configuration for the obvious reason that this represents the end of the processthe move from to is sanctioned by a frame in which the root consonant is ignoredthere must be a quotvquot on the template tape and an quotaquot in the current position of the vocalismhowever the vocalism tape will not move when the automata move to their next statesfinally there will be an quotaquot on the tape containing the surface formin summary given that the prosodie template calls for a vowel the next vowel in the vocalism has been copied to the surfacenondeterministically the device predicts that this same contribution from the vocalism will also be required to fill a later positionthe move from to is sanctioned by a frame in which the vocalism is ignoredthe template requires a consonant and the frame accordingly specifies the same consonant on both the root and the surface tapes advancing both of thema parallel move differing only in the identity of the consonant is made from to the move from to is similar to that from to except that this time the vocalism tape does advancethe nondeterministic prediction that is being made in this case is that there will be no further slots for the quotaquot to filljust what it is that makes this the quotrightquot move is a matter to which we shall returnthe move from to differs from the previous two moves over root consonants in that the quotbquot is being quotspreadquotin other words the root tape does not move and this possibility is allowed on the specific grounds that it is the last symbol on the tapeonce again the automata are making a nondeterministic decision this time that there will be another consonant called for later by the prosodic template and which it will be possible to fill only if this last entry on the root tape does not move awaythe moves from to and from to are like those from to and to respectivelyjust what is the force of the remark made from time to time in this 
commentary that a certain move is made nondeterministicallythese are all situations in which some other move was in fact open to the transducers but where the one displayed was carefully chosen to be the one that would lead to the correct resultsuppose that instead of leaving the root tape stationary in the move from to it had been allowed to advance using a frame parallel to the one used in the moves from to and to a frame which it is only reasonable to assume must exist for all consonants including quotbquotthe move from to could still have been made in the same way but this would have led to a configuration in which a consonant was required by the prosodic template but none was available from the roota derivation cannot be allowed to count as complete until all tapes are exhausted so the automata would have reached an impassewe must assume that when this happens the automata are able to return to a preceding situation in which an essentially arbitrarily choice was made and try a different alternativeindeed we must assume that a general backtracking strategy is in effect which ensures that all allowable sequences of choices are explorednow consider the nondeterministic choice that was made in the move from to as contrasted with the one made under essentially indistinguishable circumstances from to if the vocalism tape had advanced in the first of these situations but not in the second we should presumably have been able to generate the putative form quotaktibibquot which does not existthis can be excluded only if we assume that there is a transducer that disallows this sequence of events or if the frames available for quotiquot are not the same as those for quotaquotwe are in fact making the latter assumption on the grounds that the vowel quotiquot occurs only in the final position of arabic verb stemsconsider now the forms in rows ii and v of table iin each of these the middle consonant of the root is geminate in the surfacethis is not a result of spreading as we have described it because spreading only occurs with the last consonant of a rootif the prosodic template for row ii is quotcvccvcquot how is that we do not get forms like quotkatbabquot and quotkutbibquot beside the ones shownthis is a problem that is overcome in mccarthy autosegmental account only at considerable costindeed is is a deficiency of that formalism that the only mechanisms available in it to account for gemination are as complex as they are given how common the phenomenon iswithin the framework proposed here gemination is provided for in a very natural wayconsider the following pair of frame schemata in which c is and arbitrary consonant the first of these is the one that was used for the consonants in the above example except in the situation for the first occurrence of quotbquot where is was being spread into the final two consonantal positions of the formthe second frame differs from this is two respectsfirst the prosodic template contains the hitherto unused symbol quotgquot for quotgeminatequot and second the root tape is not advancedsuppose now that the the prosodic template for forms like quotkattabquot is not quotcvccvcquot but quotcvgcvcquotit will be possible to discharge the quotgquot only if the root template does not advance so that the following quotcquot in the template can only because the same consonant to be inserted into the word a second timethe sequence quotgcquot in a prosodic template is therefore an idiom for consonant geminationneedless to say mccarthy work on which this paper is based is 
not interesting simply for the fact that he is able to achieve an adequate description of the data in table i but also for the claims he makes about the way that account extends to a wider class of phenomena thus achieving a measure of explanatory powerin particular he claims that it extends to roots with two and four consonantsconsider in particular the following sets of forms ktanbab dhanraj kattab dahraj takattab tadahraj those in the second column are based on the root dhrjin the first column are the corresponding forms of ktbthe similarity in the sets of corresponding forms is unmistakablethey exhibit the same patterns of consonants and vowels differing only in that whereas some consonant appears twice in the forms in column one the consonantal slots are all occupied by different segments in the forms on the rightfor these purposes the quotnquot of the first pair of forms should be ignored since it is contributed by the prosodic template and not by the root consonantal slot in the prosodic template only in the case of the shorter formthe structure of the second and third forms is equally straighforward but it is less easy to see how our machinery could account for themonce again the template calls for four root consonants and where only three are provided one must do double dutybut in this case the effect is achieved through gemination rather than spreading so that the gemination mechanism just outlined is presumably in playthat mechanism makes no provision for gemination to be invoked only when needed to fill slots in the prosodic template that would otherwise remain emptyif the mechanism were as just described and the triliteral forms were quotcvgcvcquot and quottvcvgcvcquot respectively then the quadriliteral forms would have to be generated on a different baseit is in cases like this of which there in fact many that the finitestate transducers play a substantive rolewhat is required in this case is a transducer that allows the root tape to remain stationary while the template tape moves over a quotgquot provided no spreading will be allowed to occur later to fill consonantal slots that would given a triliteral and a quadriliteral root otherwise he unclaimedif extra consonants are the first pair are exactly as one would expectthe final root consonant is spread to fill the final required then the first priority must he to let them occupy the slots marked with a quot0quot in the templatefig1 shows a schema for the transition diagram of a transducer that has this effecti call it a quotschemaquot only because each of the edges shown does duty for a number of actual transitionsthe machine begins in the quotstartquot state and continues to return there so long as no frame is encountered involving a quotgquot on the template tapea quotgquot transition causes a nondeterministic choiceif the root tape moves at the same time as the quotgquot is scanned the transducer goes into its quotnospreadquot state to which it continues to return so long as every move over a quotcquot on the prosodic tape is accompanied by a move over a consonant on the root tapein other words it must be possible to complete the process without spreading consonantsthe other alternative is that the transducer should enter the quotgeminatequot state over a transition over a in the template with the root tape remaining stationarythe transitions at the quotgeminatequot state allow both spreading and nonspreading transitionsin summary spreading can occur only if the transducer never leaves the quotstartquot state and there 
is no quotgquot in the template or there is a quotgquot on the template which does not trigger geminationa quotgquot can fail to trigger gemination only when the root contains enough consonants to fill all the requirements that the template makes for themone quadriliteral case remains to be accounted for namely the following ktaabab dharjaj according to the strategy just elaborated we should have expected the quadriliteral form to have been quotdhaarajquotbut apparently this form contains a slot that is used for vowel lengthening with triliteral roots and as consonantal position for quadriliteralswe must therefore presumably take it that the prosodic template for this form is something like quotccvxcvcquot where quotxquot is a segment but not specified as either vocalic or consonantalthis much is in line with the proposal that mccarthy himself makes the question is when should be filled by a vowel and when by a consonantthe data in table i is of course insufficient to answer question but a plausible answer that strongly suggests itself is that the quotxquot slot prefers a consonantal filler except where that would result in geminationif this is true then it is another case where the notion of gemination though not actually exemplified in the form plays a central rolesupposing that the analysis is correct the next question is how is it to be implementedthe most appealing answer would be to make quotxquot the exact obverse of quotgquot when filled with a consonantin other words when a root consonant fills such a slot the root tape must advance so that the same consonant will no longer be available to fill the next positionthe possibility that the next root consonant would simply be a repetition of the current one would be excluded if we were to take over from autosegmental phonology and morphology some version of th obligatory contour principle which disallows repeated segments except in the prosodic template and in the surface stringmccarthy points out the roots like smnn which appear to violate the ocp can invariably be reanalyzed as biliteral roots like sm and if this is done our analysis like his goes throughthe ocp does seem likely to cause some trouble when we come to treat one of the principal remaining problems namely that of the forms in row i of table iit turns out that the vowel that appears in the second syllable of these forms is not provided by the vocalism but by the rootthe vowel that appears in the perfect is generally different from the one that appears in the imperfect and four different pairs are possiblethe pair that is used with a given root is an idiosyncratic property of that rootone possibility is therefore that we treat the traditional triliteral roots as consisting not simply of three consonants but as three consonants with a vowel intervening between the second and third for a total of four segmentsthis flies in the face of traditional wisdomit also runs counter to one of the motivating intuitions of autosegmental phonology which would have it that particular phonological features can be represented on at most one lexical tier or tapethe intuition is that these tiers or tapes each contain a record or a particular kind of articulatory gesture from the hearer point of view it is as though they contained a record of the signal received from a receptor that was attuned only to certain featuresif we wish to maintain this model there are presumably two alternatives open to usboth involve assuming that roots are represented on at least two tapes in parallel with the 
consonants separate from the vowelaccording to one alternative the root vowel would be written on the same tape as the vocalism according to the other it would be on a tape of its ownunfortunately neither alternative makes for a particularly happy solutionno problem arises from the proposal that a given morpheme should in general be represented on more than one lexical tapehowever the idea that the vocalic material associated with a root should appear on a special tape reserved for it alone breaks the clean lines of the system as so far presented in two waysfirst it spearates material onto two tapes specifically the new one and the vocalism on purely lexical grounds having nothing to do with their phonetic or phonological constitution and this runs counter to the idea of tapes as records of activity on phonetically specialized receptorsit is also at least slightly troublesome in that that newly introduced tape fills no function except in the generation of the first row of the tableneither of these arguments is conclusive and they could diminish considerably in force as a wider range of data was consideredrepresenting the vocalic contribution of the root on the same tape as the vacalism would avoid both of these objections but would require that vocalic contribution to be recorded either before or after the vocalism itselfsince the root vowel affects the latter part of the root it seems reasonable that it should be positioned to the rightnotice however that this is the only instance in which we have had to make any assumptions about the relative ordering of the morphemes that contribute to a stemonce again it may be possible to assemble further evidence reflecting on some such ordering but i do not see it in these datait is only right that i should point out the difficulty of accounting satisfactorily for the vocalic contribution of verbal rootsit is only right that i should also point out that the autosegmental solution fares no better on this score resorting as it must to rules that access essentially nonphonological properties of the morphemes involvedby insisting that what i have called the spelling of a morpheme should by by definition be its only contribution to phonological processes i have cut myself off from any such deus ex machinalinguists in general and computational linguists in particular do well to employ finitestate devices wherever possiblethey are theoretically appealing because they are computational weak and best understood from a mathematical point of viewthey are computationally appealing because they make for simple elegant and highly efficient implementaionsin this paper i hope i have shown how they can be applied to a problem in nonconcatenative morphology which seems initially to require heavier machinary
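To make the generation procedure described above more concrete, here is a minimal Python sketch, not Kay's actual implementation, of a backtracking generator over the four tapes (root, prosodic template, vocalism, surface). The function name generate, the frame encoding, and the choice of templates are my own illustrative assumptions; the root ktb and vocalism a follow the paper, and the treatment of spreading (only the last root consonant may be reused), of gemination (a "g" slot writes the current root consonant without advancing the root tape), and the requirement that every tape be exhausted before a derivation counts as complete are taken from the description above. The Fig. 1 control transducer, which lets a "g" fail to trigger gemination when the root has enough consonants, is deliberately omitted, so quadriliteral roots such as dhrj are not handled correctly here.

```python
# A minimal sketch of frame-based generation over a root tape, a
# prosodic-template tape, a vocalism tape and a surface tape.  Frames are
# applied nondeterministically; a simple backtracking search explores all
# allowable sequences of choices, and a derivation only counts as complete
# when every input tape is exhausted.
# Template symbols: 'c' = consonant slot, 'v' = vowel slot,
# 'g' = geminate marker ("gc" is the idiom for consonant gemination).

def generate(root, template, vocalism):
    results = set()

    def step(r, t, v, surface):
        if t == len(template):                     # template exhausted:
            if r == len(root) and v == len(vocalism):
                results.add(surface)               # all tapes must be used up
            return
        slot = template[t]
        if slot == 'v' and v < len(vocalism):
            vowel = vocalism[v]
            # nondeterministic choice: advance the vocalism tape, or leave it
            # in place so the same vowel can fill a later v slot
            step(r, t + 1, v + 1, surface + vowel)
            step(r, t + 1, v, surface + vowel)
        elif slot == 'c' and r < len(root):
            cons = root[r]
            step(r + 1, t + 1, v, surface + cons)  # normal frame: both advance
            if r == len(root) - 1:                 # spreading: only the *last*
                step(r, t + 1, v, surface + cons)  # root consonant may be reused
        elif slot == 'g' and r < len(root):
            # gemination: write the consonant but do NOT advance the root
            # tape, so the following 'c' slot must reuse the same consonant
            step(r, t + 1, v, surface + root[r])

    step(0, 0, 0, "")
    return results

if __name__ == "__main__":
    print(generate("ktb", "cvcvc", "a"))    # {'katab'}
    print(generate("ktb", "cvgcvc", "a"))   # {'kattab'}  (the "gc" idiom)
    print(generate("ktb", "cvccvc", "a"))   # {'katbab'} -- the unwanted form
                                            # that motivates "cvgcvc" instead
```

The last call is meant to mirror the paper's point: with a plain cvccvc template and spreading, one would wrongly obtain katbab, which is why cvgcvc is posited for forms like kattab.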
E87-1002
Nonconcatenative Finite-State Morphology
Instead of modeling morphology along the lines of the more traditional finite-state transducer, we suggest modeling it with an n-tape automaton, where tapes would carry precisely the interleaving that is called for in Semitic interdigitation. We propose a framework in which each of the autosegmental tiers is assigned a tape in a multi-tape finite-state machine, with an additional tape for the surface form.
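As a companion to the frame-based sketch above, here is a rough rendering, again only a sketch under my own simplifying assumptions, of the kind of control automaton schematized in Fig. 1 of the paper: a three-state machine run in parallel with the tape-writing frames that licenses spreading only when no "g" slot has been taken with the root tape advancing. Summarizing each frame as a (template symbol, root-tape-advances) pair, and handling at most one "g" per template, are simplifications of mine, not part of the paper.

```python
# Sketch of a spread/no-spread control transducer in the spirit of Fig. 1.
# It inspects a sequence of frames, each summarized as
# (template_symbol, root_tape_advances), and says whether the sequence is
# licensed: after a "g" taken with the root advancing, no later consonant
# slot may be filled by spreading.

START, NO_SPREAD, GEMINATE = "start", "no-spread", "geminate"

def licensed(frames):
    state = START
    for template_symbol, root_advances in frames:
        if state == START:
            if template_symbol == "g":
                # nondeterministic in the paper; deterministic here because
                # the frame already records whether the root tape moved
                state = NO_SPREAD if root_advances else GEMINATE
            # any other frame loops at "start", where spreading is allowed
        elif state == NO_SPREAD:
            if template_symbol == "c" and not root_advances:
                return False      # spreading after a non-geminating "g"
        elif state == GEMINATE:
            pass                  # both spreading and non-spreading allowed
    return True

if __name__ == "__main__":
    # "kattab" from cvgcvc: the "g" leaves the root stationary, so the
    # following "c" reuses the same consonant -> geminate state, licensed
    print(licensed([("c", True), ("v", False), ("g", False),
                    ("c", True), ("v", False), ("c", True)]))   # True
    # a "g" taken with the root advancing forbids later spreading
    print(licensed([("c", True), ("v", False), ("g", True),
                    ("c", True), ("v", False), ("c", False)]))  # False
```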
inference in datr a declarative language for representing a restricted class of inheritance networks permitting both multiple and default inheritance the principal intended area of application is the representation of lexical entries for natural language processing and we use examples from this domain throughout in this paper we present the syntax and inference mechanisms for language the goal of the is the design of a simple language that has the necessary expressive power to encode the lexical entries presupposed by contemporary work in the unification grammar tradition can express all the evident generalizations about such entries has an explicit theory of inference is computationally tractable and has an explicit declarative semantics the present paper is primarily concerned with though the examples used may hint at our strategy in respect of and inheritance networks provide an intuitively appealing way of thinking about the representation of various kinds of knowledgethis fact has not gone unnoticed by a number of researchers working on lexical knowledge representation egde smedt flickinger et al calder te linden daelemans gazdar and calder however many such networks have been realized in the context of programming systems or programming languages that leave their precise meaning unclearin the light of braclunan ether ington and much other recent work it ha become apparent that the formal properties oi notations intended to represent inheritance arc highly problematicalthough not discussec here datr has a formal semantics for which some completeness anc soundness results have been derivedthese results and others the language consists of strings of symbols drawn from the se sym quot and the set atom and node all of which are disjointa string is in datr if it is sentence as defined by the following set 01 rules there are two kinds of sentence those containing and those containing both kinds have on their lefthand side a node path specification where a path is a sequence of atoms enclosed in pragmatically the 3 sentences are intended for defining the network whilst the statements express the values at individual nodesput another way the former provide the database definition language whilst the latter provide the query language the useful premises will standardly all be statements whilst the interesting theorems will standardly all be statements in view of this distinction we shall sometimes refer to sentences as definitional and 4 sentences as extensionalthroughout the examples in this paper we shall use bold for nodes and roman for atomsbold italic and italic will be used for corresponding metanotational variablesvariables such as n p lg and v will be assumed to be typed we shall sometimes refer to atoms occurring in paths as attributesthe righthand sides of extensional sentences are values that is simple atoms or lists of atomsnested lists enclosed in lists are provided to allow the components of complex values to be specified independently as an example the following sentences might be derivable from a lexical entry for english be likewise the following for german buch values are the principal results of a datr description the most typical operation is to determine the value associated with some nodepath pairthe righthand sides of definitional sentences are lvalues which can be simple atoms inheritance descriptors or lists of lvaluesan atom is primitive an inheritance descriptor specifies where the required value can be inherited from and lists allow arbitrary structures to be built 
as valuesinheritance descriptors come in several forms with two dimensions of variationthe unquotedquoted distinction specifies whether the inheritance context is local or global once the context is established the descriptor specifies a new node a new lpath or both to be used to determine the inherited valuefor example the following sentences might be found in a description of a lexicon for english finally an lpath is a path made up of lvalues that is elements which themselves may need evaluation as in this example quot quotquot quotquotwe adopt the following abbreviation convention for sets of sentences about a single nodedatr has seven syntactic rules of inference falling into three groupsthe first rule just provides us with a trivial route from definitional to extensional sentences note that v must be a value here otherwise the consequent would not be wellformedthe next three rules implement local inheritance of values and use the following additional metanotational device the expression e0 is wellformed iff eo el and e2 are lvalues and el occurs as a subexpression of eoin that case the expression denotes the result of substituting e2 for all occurrences of el in eorule ii says that if we have a theorem nlp1 l where l contains n2p2 as a subexpression and we also have a theorem n2p2 g then we can derive a theorem in which all occurrences of n2p2 in l are replaced by g in the simplest case this means that we can interpret a sentence of the form n1p1n2p2 as an inheritance specification meaning quotthe value of p1 at ni is inherited from p2 at n2quotso for example from rules iii and iv are similar but specify only a new node or path to inherit fromthe other component is unchanged that is it is the same as the corresponding component on the lefthandside of the rule specifying the inheritancein fact the following two sentence schemas are entirely equivalent rules ii iii and iv implement a local notion of inheritance in the sense that the new node or path specifications are interpreted in the current local contextthe three remaining inference rules implement a nonlocal notion of inheritance quoted descriptors specify values to be 68 interpreted in the context in which the original query was made rather than the current contextto see how the operation of these rules differs from the earlier unquoted cases consider the following theory the intention here is that the cat node expresses the generalisation that by default plural is the same as singular v and al inherit this but a2 while inheriting its plural form from al has an exceptional singular form overriding inheritance from cat now from this theory we can derive all the following theorems concerning plural and the following theorem concerning singular a2 enbut we cannot derive a theorem for v for examplethis is because v inherits from cat which inherits from cat which is not definedwhat we wanted was for cat to inherit from v that is from the global initial contextto achieve this we change the cat definition to be cat quotquotnow we find that we can still derive the same plural theorems but now in addition we get all these theorems concerning singular for example the derivation for the first of these is as follows finally given a set of sentences t we define the ruleclosure of 7 rc1 to be the closure of t under finite application of the above inference rules in the conventional fashionin addition to the conventional inference defined above datr has a nonmonotonic notion of inference by default each definitional sentence about some nodepath 
combination implicitly determines additional sentences about all the extensions to the path at that node for which no more specific definitional sentence exists in the theoryour overall approach follows moore whose treatment of inferences from sets of beliefs can be viewed more generally as a technique for providing a semantics for a declarative notion of inference by default we begin with some auxiliary definitionsthe expression paq where p and q are paths denotes the path formed by concatenating components of p and qa path p2 is an extension of a path p1 iff there is a path q such that p2 p1aqp2 is a strict extension if q is nonemptywe also use the a operator to denote extension of all the paths in a datr sentence as in the following examples given a sentence s we define the root of s to be the nodepath expression appearing to the left of the equality in s the root does not correspond to any syntactic category defined above it is simply a substring of the sentencegiven a set of sentences in datr t a node n and a path p we say np is specified in q if t contains a definitional sentence s whose root is np let ni p1 ni p2 be such that ni p1 is specified in t we say nlp2 is connected to ni p1 if there is no strict extension p3 of p1 of which p2 is an extension such that n1p3 is specified in t so ni p2 is connected to ni p1 if pi is the maximal subpath of p2 that is specified in t now given a set of sentences t define the path closure pcl of t to be pcl it is clear from these definitions that any np is connected to itself and thus that t is always a subset of pdthe path closure contains all those theorems which can be inferred by default from t to illustrate path closure consider the following example theory the situation is slightly more complicated with sentences that have paths on their righthand sidessuch paths are also extended by the subpath used to extend the lefthand sideso the sentence might give rise to sentences such as a2 quotalxplur fern nomquotusing default inference the example theory we used to illustrate global inference can be phrased more succinctly in this version we state that anything not specifically mentioned for v is inherited from cat whereas before we had to list cases explicitlysimilarly al inherits by default from cat and a2 from althe operation of path closure is nonmonotonic if we add more sentences to our original theory some of our derived sentences may cease to be truethe two forms of inference in datr are combined by taking the path closure of a theory first and then applying the inference rules to the resultin other words given a theory qc and a sentence s s is provable from t if rd 70evans work was supported by a grant from the sercgazdar work was supported by grants from the esrc and sercwe are grateful to our referees and to jon cunningham walter daelemans david israel bill keller tom khabaza ewan klein bob moore fernando pereira allan ramsay and chris thornton for clarifying our thinking about aspects of datr
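The local inheritance, quoted (global) inheritance, and default inference by longest-prefix path matching described above can be pictured with a small Python sketch. The theory below (nodes CAT, V, A1, A2 and the values walk, big, small) is a toy of my own, not the paper's example, and the lookup and value functions are only one possible rendering of the mechanisms; lists, evaluable paths, and the full quoted-descriptor syntax are ignored.

```python
# A minimal sketch of DATR-style evaluation: local inheritance, quoted
# (global) inheritance, and inference by default via longest-prefix path
# matching, with the unmatched suffix extending the right-hand side.

THEORY = {
    # (node, path)        right-hand side
    ("CAT", ("plur",)): ("global", ("sing",)),     # CAT:<plur> == "<sing>"
    ("V",   ()):        ("local",  "CAT", ()),     # V:<>       == CAT
    ("V",   ("sing",)): ("value",  "walk"),
    ("A1",  ()):        ("local",  "CAT", ()),
    ("A1",  ("sing",)): ("value",  "big"),
    ("A2",  ()):        ("local",  "A1", ()),
    ("A2",  ("sing",)): ("value",  "small"),
}

def lookup(node, path):
    """Most specific definitional sentence for node:<path>.
    Returns (rhs, unmatched suffix of path)."""
    best = None
    for (n, p), rhs in THEORY.items():
        if n == node and path[:len(p)] == p:
            if best is None or len(p) > len(best[0]):
                best = (p, rhs)
    if best is None:
        raise KeyError((node, path))
    p, rhs = best
    return rhs, path[len(p):]          # suffix extends the RHS (path closure)

def value(node, path, global_node=None):
    if global_node is None:
        global_node = node             # the global context is the initial query
    rhs, suffix = lookup(node, tuple(path))
    kind = rhs[0]
    if kind == "value":
        return rhs[1]
    if kind == "local":                # N:<p> evaluated in the local context
        _, new_node, new_path = rhs
        return value(new_node, tuple(new_path) + suffix, global_node)
    if kind == "global":               # "<p>" evaluated at the original node
        _, new_path = rhs
        return value(global_node, tuple(new_path) + suffix, global_node)

if __name__ == "__main__":
    print(value("A2", ("sing",)))   # small (locally defined)
    print(value("A2", ("plur",)))   # small: no <plur> sentence, so the query
                                    # routes via A1 and CAT to the *global* <sing>
    print(value("V",  ("plur",)))   # walk: the quoted "<sing>" resolves at V
```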
E89-1009
Inference in DATR
DATR is a declarative language for representing a restricted class of inheritance networks, permitting both multiple and default inheritance. The principal intended area of application is the representation of lexical entries for natural language processing, and we use examples from this domain throughout. In this paper we present the syntax and inference mechanisms for the language. The goal of the DATR enterprise is the design of a simple language that (i) has the necessary expressive power to encode the lexical entries presupposed by contemporary work in the unification grammar tradition, (ii) can express all the evident generalizations about such entries, (iii) has an explicit theory of inference, (iv) is computationally tractable, and (v) has an explicit declarative semantics. The present paper is primarily concerned with (iii), though the examples used may hint at our strategy in respect of (i) and (ii). We introduce DATR, a formal language for representing lexical knowledge.
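For the syntactic side, the following is a rough reader for the two kinds of DATR sentence distinguished above, definitional (==) and extensional (=). The regular-expression grammar is my own approximation, covering only simple right-hand sides (an atom, a quoted or unquoted descriptor, or a bare path), not the full syntax given in the paper.

```python
# Sketch of a reader for simple DATR sentences of the forms
#   Node:<path> == rhs.   (definitional)
#   Node:<path> = rhs.    (extensional)
import re

SENTENCE = re.compile(
    r'^(?P<node>[A-Z]\w*):\s*<(?P<path>[^>]*)>\s*'
    r'(?P<op>==|=)\s*(?P<rhs>.+?)\s*\.$'
)

def parse(sentence):
    m = SENTENCE.match(sentence.strip())
    if not m:
        raise ValueError("not a simple DATR sentence: %r" % sentence)
    path = tuple(m.group('path').split())          # atoms in the path
    kind = 'definitional' if m.group('op') == '==' else 'extensional'
    return {'node': m.group('node'), 'path': path,
            'kind': kind, 'rhs': m.group('rhs')}

if __name__ == "__main__":
    for s in ['CAT:<plur> == "<sing>".',
              'V:<> == CAT.',
              'A2:<sing> = en.']:
        print(parse(s))
```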
translation by structural correspondences we sketch and illustrate an approach to machine translation that exploits the potential of simultaneous correspondences between separate levels of linguistic representation as in the of codescriptions the approach is illustrated with examples from english german and french where the source and the target language sentence show noteworthy differences in linguistic analysis in this paper we sketch an approach to machine translation that offers several advantages compared to many of the other strategies currently being pursuedwe define the relationship between the linguistic structures of the source and target languages in terms of a set of correspondence functions instead of providing derivational or procedural techniques for converting source into targetthis approach permits the mapping between source and target to depend on information from various levels of linguistic abstraction while still preserving the modularity of linguistic components and of source and target grammars and lexiconsour conceptual framework depends on notions of structure structural description and structural correspondencein the following sections we outline these basic notions and show how they can be used to deal with certain interesting translation problems in a simple and straightforward wayin its emphasis on descriptionbased techniques our approach shares some fundamental features with the one proposed by kay but we use an explicit projection mechanism to separate out and organize the intra and interlanguage componentsmost existing translation systems are either transferbased or interlinguabased transferbased systems usually specify a single level of representation or abstraction at which transfer is supposed to take placea source string is analyzed into a structure at that level of representation a transfer program then converts this into a target structure at the same level and the target string is then generated from this structure interlinguabased systems on the other hanc require that a source string has to be analyzec into a structure that is identical to a structure from which a target string has to be generatedwithout further constraints each of these approaches could in principle be successfular interlingual representation could be devised for example to contain whatever informatior is needed to make all the appropriat are attractive because the provide structures with welldefined anour approach uses the equality and descriptionbased mechanisms of lexicalfunctional grammaras introduced by kaplan and bresnan lexicalfunctional grammar assigns to every sentence two levels of syntactic representation a constituent structure and a functional structure these structures are of different formal typesthe cstructure is a phrasestructure tree while the fstructure is a hierarchical finite functionand they characterize different aspects of the information carried by the sentencethe cstructure represents the ordered arrangement of words and phrases in the sentence while the fstructure explicitly marks its grammatical functions for each type of structure there is a special notation or descriptionlanguage in which the properties of desirable instances of that type can be specifiedconstituent structures are described by standard contextfree rule notation while fstructures are described by boolean combinations of functionargument equalities stated over variables that denote the structures of interestkaplan and bresnan assumed a correspondence function mapping between the nodes in 
the cstructure of a sentence and the units of its fstructure and used that piecewise function to produce a description of the fstructure by virtue of the motherdaughter order and category relations of the cstructurethe formal picture developed by kaplan and bresnan as clarified in kaplan is illustrated in the following structures for sentence the cstructure appears on the left the fstructure on the rightthe cstructuretofstructure correspondence 4 is shown by the linking linesthe correspondence 4 is a manytoone function taking the s vp and v nodes all into the same outermost unit of the fstucture the nodeconfiguration at the top of the tree satisfies the statement s 0 np vp in the contextfree description language for the cstructureas suggested by kaplan this is a simple way of defining a collection of more specific properties of the tree such as the fact that the s node is the mother of the np node these facts could also be written in equational form as m where m denotes the function that takes a treenode into its mothersimilarly the outermost fstructure satisfies the assertions past sg in the fstructure description languagegiven the illustrated correspondence we also know that fi 4 and f24taking all these propositions together we can infer first that 4 and then that subj4this equation identifies the subject in the fstructure in terms of the motherdaughter relation in the treein lfg the fstructure assigned to a sentence is the smallest one that satisfies the conjunction of equations in its functional descriptionthe functional description is determined from the trees that the cstructure grammar provides for the string by a simple matching processa given tree is analyzed with respect to the cstructure rules to identify particular nodes of interestequations about the fstructure corresponding to those nodes are then derived by substituting those nodes into equationpatterns or schematathus still following kaplan if appears in a schema to stand for the node matching a given rulecategory the functional description will include an equation containing that node instead of the equation subj4 that we inferred above also results from instantiating the schema sui3j 4 annotated to the np element of the s rule in when that ruleelement is matched against the tree in kaplan observes that the t and metavariables in the kaplanbresnan formulation of lfg are simply convenient abbreviations for the complex expressions 4 and 4 respectively thus explicating the traditional more palatable formulation in this basic conception of descriptions and correspondences has been extended in several waysfirst this framework has been generalized to additional kinds of structures that represent other subsystems of linguistic information these structures can be related by new correspondences that permit appropriate descriptions of more abstract structures to be producedhalvorsen and kaplan for example discuss a level of semantic structure that encodes predicateargument relations and quantifier scope information that does not enter into the kinds of syntactic generalizations that the fstructure supportsthey point out how the semantic structure can be set in correspondence with both cstructure and fstructure units by means of related mappings a and akaplan raises the possibility of further distinct structures and correspondences to represent anaphoric dependencies discourse properties of sentences and other projections of the same stringsecond kaplan and halvorsen and kaplan discuss other methods for deriving the descriptions 
necessary to determine these abstract structuresthe arrangement outlined above in which the description of one kind of structure is derived by analyzing or matching against another one is an example of what is called descriptionbyanalysisthe semantic interpretation mechanisms proposed by halvorsen and reyle are other examples of this descriptive techniquein this method the grammar provides general patterns to compare against a given structure and these are then instantiated if the analysis is satisfactoryone consequence of this approach is that the structure in the range of the correspondence the one whose description is being developed can only have properties that are derived from information explicitly identified in the domain structureanother description mechanism is possible when three or more structures are related through correspondencessuppose the cstructure and fstructure are related by 4 as in and that the function a then maps the fstructure units into corresponding units of semantic structure of the sort suggested by fenstad et al the formal arrangement is shown in figure 1 this configuration of cascaded correspondences opens up a new descriptive possibilityif a and 4 are both structural correspondences then so is their composition a 0 4thus even though the units of the semantic structure correspond directly only to the units of the fstructure and have no immediate connection to the nodes of the cstructure a semantic description can be formulated in terms of cstructure relationsthe expression a can appear on a cstructure ruleelement to designate the semanticstructure unit corresponding to the fstructure that corresponds to the mother of the node that matches that ruleelementsince projections are monadic functions we can remove the uninformative parentheses and write m subj or using the metavariable schemata such as this can be freely mixed with lfg standard functional specifications in lexical entries and cstructure rulesfor example the lexical entry for fall might be given as follows descriptions formulated by composing separate correspondences have a surprising characteristic they allow the final range structure to have properties that cannot be inferred from any information present in the intermediate structurebut those properties can obtain only if the intermediate structure is derived from an initial structure with certain featuresfor example kaplan and maxwell exploit this capability to describe semantic structures for coordinate constructions which necessarily contain the logical conjunction appropriate to the string even though there is no reasonable place for that conjunction to be marked in the fstructurein sum this method of description which has been called codescription permits information from a variety of different levels to constrain a particular structure even though there are no direct correspondences linking them togetherit provides for modularity of basic relationships while allowing certain necessary restrictions to have their influencethe descriptive architecture of lfg as extended by kaplan and halvorsen provides for multiple levels of structure to be related by separate correspondences and these correspondences allow descriptions of the various structures to be constructed either by analysis or composition from the properties of other structuresearlier researchers have applied these mechanisms to the linguistic structures for sentences in a single languagein this paper we extend this system one step further we introduce correspondences between structures 
for sentences in different languages that stand in a translation relation to one anotherthe description of the target language structures are derived via analysis and codescription from the source language structures by virtue of additional annotations in cstructure rules and lexical entriesthose descriptions are solved to find satisfying solutions and these solutions are then the input to the target generation processin the two language arrangement sketched below we introduce the v correspondence to map between the fstructure units of the source language and the fstructure units of the target languagethe a correspondence maps from the fstructure of each language to its own corresponding semantic structure and a second transfer correspondence 1 relates those structuresthis arrangement allows us to describe the target fstructure by composing 4 or simply t this maps a comp in the source fstructure into an xcomp in the target fstructurethe relations asserted by this equation are depicted in the following sourcetarget diagram as another example the equation quote t argo identifies the first arguments in the source and target semantic structuresthe equation ta 43 imposes the constraint that the semantics of the source subj will translate via t into the semantics of the target topic but gives no further information about what those semantic structures actually containour general correspondence architecture thus applies naturally to the problem of translationbut there are constraints on correspondences specific to translation that this general architecture does not addressfor instance the description of the targetlanguage structures derived from the sourcelanguage is incompletethe target structures may and usually will have grammatical and semantic features that are not determined by the sourceit makes little sense for example to include information about grammatical gender in the transfer process if this feature is exhaustively determined by the grammar of the target languagewe can formalize the relation between the information contained in the transfer component and an adequate translation of the source sentence into a target sentence as follows for a target sentence to be an adequate translation of a given source sentence it must be the case that a minimal structure assigned to that sentence by the target grammar is subsumed by a minimal solution to the transfer descriptionone desirable consequence of this formalization is that it permits two distinct target strings for a source string whose meaning in the absence of other information is vague but not ambiguousthus this conceptual and notational framework provides a powerful and flexible system for imposing constraints on the form of a target sentence by relating them to information that appears at different levels of sourcelanguage abstractionthis apparatus allows us to avoid many of the problems encountered by more derivational transformational or procedural models of transferwe will illustrate our proposal with examples that have posed challenges for some other approacheschanges in grammatical functionsome quite trivial changes in structure occur when the source and the target predicate differ in the grammatical functions that they subcategorize forwe will illustrate this with an example in which a german transitive verb is translated with an intransitive verb taking an oblique complement in french we treat the oblique preposition as a pred that itself takes an objectignoring information about tense the lexical entry for beantworten in the 
german lexicon looks as follows we use the special attribute fn to designate the functionname in semantic forms such as beantworten in this transfer equation it identifies repondre as the corresponding french predicatethis specification controls lexical selection in the target for example selecting the following french lexical entry to be used in the translation with these entries and the appropriate but trivial entries for der student and die frage we get the following fstructure in the source language and associated fstructure in the target language for the sentence in in the previous example the effects of the change in grammatical function between the source and the target language are purely localin other cases there is a nonlocal dependency between the subcategorizing verb and a dislocated phrasethis is illustrated by the relative clause in the letter that the student seemed to answerthe withinclause functions of the relativized phrases in the source and target language are determined by predicates which may be arbitrarily deeply embedded but the relativized phrase in the target language must correspond to the one in the source languagelet us assume that relative clauses can be analyzed by the following slightly simplified phrase structure rules making use of functional uncertainty to capture the nonlocal dependency of the relativized phrase the second structure is the fstructure the grammar of french assigns to the sentence in this fstructure is the input for the generation processother examples of this kind are pairs like like and plaire and help and helfenwe can achieve the desired correspondence between the source and the target by augmenting the first rule with the following transfer equations the effect of this rule is that the i value of the relativized phrase in the source language is identified with the relativized phrase in the target languagehowever the source reltopic is also identified with a withinclause function say 0i3j by the uncertainty equation in lexical transfer rules such as the one given in independently establish the correspondence between source and target withinclause functionsthus the target withinclause function will be identified with the target relativized phrasethis necessary relation is accomplished by lexically and structurally based transfer rules that do not make reference to each otherdifferences in controla slightly more complex but similar case arises when the infinitival complement of a raising verb is translated into a finite clause as in the following in this case the necessary information is distributed in the following way over the source target and transfer lexicons as shown in figure 2here the transfer projection builds up an underspecified target structure to which the information given in the entry of probable is added in the process of generationignoring the contribution of is the fstructure for the english sentence identifies the nonthematic subj of likely with the thematic subj of work as follows the corresponding french structure in contains an expletive subj il for probable and an overtly expressed subj for travaillerthe latter is introduced by the transfer entry for again this fstructure satisfies the transfer description and is also assigned by the french grammar to the target sentencethe use of multiple projectionsthere is one detail about the example in that needs further discussionsimplifying matters somewhat there is a requirement that the temporal reference point of the complement has to follow the temporal reference point of 
the clause containing likely if the embedded verb is a process verbbasically the same temporal relations have to hold in french with probablethe way this is realized will depend on what the tense of probable is which in turn is determined by the discourse up to that pointa sentence similar to the one given in but appearing in a narrative in the past would translate as the following 278 likely it is realized in a different waythis can be expressed by the following equation ii etait probable que letudiant travailleraitin the general case the choice of a french tense does not depend on the tense of the english sentence alone but is also determined by information that is not part of the fstructure itselfwe postulate another projection the temporal structure reached from the fstructure through the correspondence x it is not possible to discuss here the specific characteristics of such a structurethe only thing that we want to express is the constraint that the event in the embedded clause follows the event in the main clausewe assume that the temporal structure contains the following information for likelytov as suggested by fenstad et al this is meant to indicate that the temporal reference point of the event denoted by the embedded verb extends after the temporal reference point of the main eventthe time of the main event is in part determined by the tense of the verb be which we ignore herethe only point we want to make is that aspects of these different projections can be specified in different parts of the grammarwe assume that french and english have the same temporal structure but that in the context of here the identity between x and xt provides an interlingualike approach to this particular subpart of the relation between the two languagesthis is diagrammed in figure 3allowing these different projections to simultaneously determine the surface structure seems at first blush to complicate the computational problem of generation but a moment of reflection will show that that is not necessarily soalthough we have split up the different equations among several projections for conceptual clarity computationally we can consider them to define one big attribute value structure with x and as special attributes so the generation problem in this framework reduces to the problem of generating from attributevalue structures which are formally of the same type as fstructures wedekind and momma and dorre for discussiondifferences in embeddingthe potential of the system can also be illustrated with a case in which we find one more level of embedding in one language than we find in the otherthis is generally the case if a modifierhead relation in the source language is reversed in the target structureone such example is the relation between the sentences in cpt 279 one way to encode this relation is given in the following lexical entry for just this assigns to just a semantic form that takes an arg function as its argument and maps it into the french venirthis lexical entry is combined with phrasestructure rule this rule introduces sentence adverbs and makes the fstructure corresponding to the s node fill the arg function in the fstructure corresponding to the acov nodenote that the fstructure of the am is not assigned a function within the snode fstructure which is shown in this is in keeping with the fact that the adverb has no functional interactions with the material in the main clausethe relation between the adverb and the clause is instead represented only in the fstructure associated with the 
am node in the original formulation of lfg the fstructure of the highest node was singled out and assigned a special statusin our current theory we do not distinguish that structure from all the others in the range of cp the grammatical analysis of a sentence includes the complete enumeration of 4associationsthe snode fstructure typically does contain the fstructures of all other nodes as subsidiary elements but not in this adverbial casethe target structures corresponding to the various fstructures are also not required to be integratedthese target fstructures can then be set in correspondence with any nodes of the target cstructure subject to the constraints imposed by the target grammarin this case the fact that venir takes an xcomp which corresponds to the arg of just means that the target fstructure mapped from the adv fstructure will be associated with the highest node of the target cstructurethis is shown in the above analysis does not require a single integrated source structure to map onto a single integrated target structurean alternative analysis can handle differences of embedding with completely integrated structuresif we assign an explicit function to the adverbial in the source sentence we can reverse the embedding in the target by replacing with in this case the embedded fstructure of the source adverb will be mapped onto the fstructure that corresponds to the root node of the target cstructure whereas the fstructure of the source s is mapped onto the embedded xcomp in the targetthe advantages and disadvantages of these different approaches will be investigated further in netter and wedekind we have sketched and illustrated an approach to machine translation that exploits the potential of simultaneous correspondences between different levels of linguistic representationthis is made possible by the equality and description based mechanisms oi lfgthis approach relies mainly on codescription and thus it is different from other lfgbased approaches that use g descriptionbyanalysis mechanism to relate the fstructure of a source language to the fstructure of a target language our proposal allows for partial specifications and multilevel transferin that sense it also differs from strategies pursued for example in the eurotra project where transfer is based on one level of representation obtained by transforming the surface structure in successive stepswe see it as one of the main advantages of our approach that it allows us to express correspondences between separate pieces of linguistically motivated representations and in this way allows the translator to exploit the linguistic descriptions of source and target language in a more direct way than is usually proposedthanks to pk halvorsen you heid h kamp m kay and c rohrer for discussion and comments
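The transfer mechanism described in the paper above can be made concrete with a small sketch. The following Python fragment is a minimal illustration, not the authors' implementation: it assumes f-structures are modelled as plain nested dicts, ignores tense as the paper's example does, and all function and feature names are invented for illustration. It shows how a transfer entry for German "beantworten" can map the source f-structure of "der Student beantwortete die Frage" onto a description of a target f-structure in which the German OBJ surfaces as a French oblique complement headed by "a".

def transfer_np(np_f):
    # Toy lexical transfer for the two nouns in the paper's example.
    lexicon = {"student": "etudiant", "frage": "question"}
    return {"PRED": lexicon[np_f["PRED"]], "SPEC": "def"}

def tau_beantworten(source_f):
    """Hypothetical transfer entry: German 'beantworten' -> French 'repondre a';
    the source OBJ is realised as an oblique OBJ in the target f-structure."""
    return {
        "PRED": "repondre<SUBJ, OBL>",
        "SUBJ": transfer_np(source_f["SUBJ"]),
        "OBL": {"PRED": "a<OBJ>", "OBJ": transfer_np(source_f["OBJ"])},
    }

source = {
    "PRED": "beantworten<SUBJ, OBJ>",
    "SUBJ": {"PRED": "student", "SPEC": "def"},
    "OBJ": {"PRED": "frage", "SPEC": "def"},
}

print(tau_beantworten(source))
# {'PRED': 'repondre<SUBJ, OBL>', 'SUBJ': {'PRED': 'etudiant', 'SPEC': 'def'},
#  'OBL': {'PRED': 'a<OBJ>', 'OBJ': {'PRED': 'question', 'SPEC': 'def'}}}

The sketch only builds the transfer description; in the architecture described above, that description would then constrain generation by the independently specified target grammar and lexicon.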
E89-1037
translation by structural correspondences we sketch and illustrate an approach to machine translation that exploits the potential of simultaneous correspondences between separate levels of linguistic representation as formalized in the lfg notion of codescriptions the approach is illustrated with examples from english german and french where the source and the target language sentence show noteworthy differences in linguistic analysis the architecture can provide a formal basis for specifying complex sourcetarget translation relationships in a declarative fashion that builds on monolingual grammars and lexicons that are independently motivated and theoretically justified
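The adequacy condition stated in the paper above, that a minimal structure assigned to the candidate sentence by the target grammar must be subsumed by a minimal solution of the transfer description, can be illustrated with a toy subsumption check over attribute-value structures. This is a hedged sketch under simplifying assumptions (no reentrancies, sets, or constraint equations), not the LFG formalism itself; the example values are invented.

def subsumes(partial, full):
    """True if every attribute/value required by `partial` is present in `full`."""
    if isinstance(partial, dict):
        return (isinstance(full, dict) and
                all(k in full and subsumes(v, full[k]) for k, v in partial.items()))
    return partial == full  # atomic values must match exactly

# The transfer description is typically partial; the target grammar may add
# features (e.g. grammatical gender) that the source does not determine.
transfer_output = {"PRED": "repondre<SUBJ, OBL>", "TENSE": "past"}
target_structure = {"PRED": "repondre<SUBJ, OBL>", "TENSE": "past",
                    "SUBJ": {"PRED": "etudiant", "GEND": "masc"}}

assert subsumes(transfer_output, target_structure)        # adequate candidate
assert not subsumes({"TENSE": "pres"}, target_structure)  # ruled out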
named entity recognition without gazetteers it is often claimed that named entity recognition systems need extensive gazetteerslists of names of people organisations locations and other named entities indeed the compilation of such gazetteers is sometimes mentioned as a bottleneck in the design of named entity recognition systems we report on a named entity recognition system which combines rulebased grammars with statistical models we report on the system performance with gazetteers of different types and different sizes using test material from the muc7 competition we show that for the text type and task of this competition it is sufficient to use relatively small gazetteers of wellknown names rather than large gazetteers of lowfrequency names we conclude with observations about the domain independence of the competition and of our experiments named entity recognition involves processing a text and identifying certain occurrences of words or expressions as belonging to particular categories of named entities ne recognition software serves as an important preprocessing tool for tasks such as information extraction information retrieval and other text processing applicationswhat counts as a named entity depends on the application that makes use of the annotationsone such application is document retrieval or automated document forwarding documents annoted with ne information can be searched more now also at harlequin ltd accurately than raw textfor example ne annotation allows you to search for all texts that mention the company quotphilip morrisquot ignoring documents about a possibly unrelated person by the same nameor you can have all documents forwarded to you about a person called quotgatesquot without receiving documents about things called gatesin a document collection annotated with named entity information you can more easily find documents about java the programming language without getting documents about java the country or java the coffeemost common among marked categories are names of people organisations and locations as well as temporal and numeric expressionhere is an example of a text marked up with named entity information flavel donne is an analyst with general trends which has been based in little spring since july 1998in an article on the named entity recognition competition sundheim remarks that quotcommon organization names first names of people and location names can be handled by recourse to list lookup although there are drawbacksquot in fact participants in that competition from the university of durham and from sra report that gazetteers did not make that much of a difference to their systemnevertheless in a recent article cucchiarelli et al report that one of the bottlenecks in designing ne recognition systems is the limited availability of large gazetteers particularly gazetteers for different languages people also use gazetteers of very different sizesthe basic gazetteers in the isoquest system for muc7 contain 110000 names but krupka and hausman show that system performance does not degrade much when the proceedings of eacl 99 gazetteers are reduced to 25000 and 9000 names conversely they also show that the addition of an extra 42 entries to the gazetteers improves performance dramaticallythis raises several questions how important are gazetteers is it important that they are big if gazetteers are important but their size is not then what are the criteria for building gazetteersone might think that named entity recognition could be done by using lists of 
names of people places and organisations but that is not the caseto begin with the lists would be huge it is estimated that there are 15 million unique surnames just in the yousit is not feasible to list all possible surnames in the world in a named entity recognition systemthere is a similar problem with company namesa list of all current companies worldwide would be huge if at all available and would immediately be out of date since new companies are formed all the timein addition company names can occur in variations a list of company names might contain quotthe royal bank of scotland plcquot but that company might also be referred to as quotthe royal bank of scotlandquot quotthe royalquot or quotthe royal plcquotthese variations would all have to be listed as welleven if it was possible to list all possible organisations and locations and people there would still be the problem of overlaps between the listsnames such as emerson or washington could be names of people as well as places philip morris could be a person or an organisationin addition such lists would also contain words like quothopequot and quotlostquot and quotthinking machinesquot and quotnextquot whereas these words could also occur in contexts where they do not refer to named entitiesmoreover names of companies can be complex entities consisting of several wordsespecially where conjunctions are involved this can create problemsin quotchina international trust and investment corp decided to do somethingquot it is not obvious whether there is a reference here to one company or twoin the sentence quotmason daily and partners lost their court casequot it is clear that quotmason daily and partnersquot is the name of a companyin the sentence quotunfortunately daily and partners lost their court casequot the name of the company does not include the word quotunfortunatelyquot but it still includes the word quotdailyquot which is just as common a word as quotunfortunatelyquotin this paper we report on a named entity recognition system which was amongst the highest scoring in the recent muc7 message understanding conferencecompetition one of the features of our system is that even when it is run without any lists of names of organisations or people it still performs at a level comparable to that of many other mu csystemswe report on experiments which show the difference in performance between the ne system with gazetteers of different sizes for three types of named entities people organisations and locationsthe muc competition for which we built our system took place in march 1998prior to the competition participants received a detailed coding manual which specified what should and should not be marked up and how the markup should proceedthey also received a few hundred articles from the new york times service marked up by the organisers according to the rules of the coding manualfor the competition itself participants received 100 articlesthey then had 5 days to perform the chosen information extraction tasks without human intervention and markup the text with the named entities foundthe resulting marked up file then had to be returned to the organisers for scoringscoring of the results is done automatically by the organisersthe scoring software compares a participant answer file against a carefully prepared key file the key file is considered to be the quotcorrectlyquot annotated fileamongst many other things the scoring software calculates a system recall and precision scores recall number of correct tags in the answer file 
over total number of tags in the key fileprecision number of correct tags in the answer file over total number of tags in the answer filerecall and precision are generally accepted ways of measuring system performance in this fieldfor example suppose you have a text which is 1000 words long and 20 of these words express a locationnow imagine a system which assigns the location tag to every single word in the textthis system will have tagged correctly all 20 locations since it tagged everything as location its recall score is 2020 or 100but of the 1000 location tags it assigned only those 20 were correct its precision is therefore only 201000 or 2we decided first to test to what extent ne recognition can be carried out merely by recourse to list lookupsuch a system could be domain and language independentit would need no grammars or even information about tokenization but simply mark up known strings in the textof course the development and maintenance of the name lists would become more labour intensive evaluated the performance of such a minimal ne recognition system equipped with name lists derived from muc6 training textsthe system was tested on newswire texts for six languagesit achieved a recall rate of about 70 for chinese japanese and portuguese and about 40 for english and frenchthe precision of the system was not calculated but can be assumed to be quite high because it would only be affected by cases where a capitalized word occurs in more than one list or where a capitalised word occurs in a list but could also be something completely different we trained a similar minimal system using the muc 7 training data and ran it on the test data set the corpus we used in our experiments were the training and test corpora for the muc 7 evaluationfrom the training data we collected 1228 person names 809 names of organizations and 770 names of locationsthe resulting name lists were the only resource used by the minimal ne recognition systemit nevertheless achieved relatively high precision and recall in the range 4070the results are summarised in figure 1 in the quotlearned listsquot columndespite its simplicity this type of system does presuppose the existence of training texts and these are not always availableto cope with the absence of training material we designed and tested another variation of the minimal systeminstead of collecting lists from training texts we instead collected lists of commonly known entities we collected a list of 5000 locations from the cia world fact book a list of 33000 organization names from financial web sites and a list of 27000 famous people from several websitesthe results of this run can be seen in figure 1 in the quotcommon listsquot columnin essence this system performance was comparable to that of the system using lists from the training set as far as location was concerned it performed slightly worse on the person category and performed badly on organisationsin a final experiment we combined the two gazetteers the one induced from the training texts with the one acquired from public resources and achieved some improvement in recall at the expense of precisionthe results of this test run are given in the quotcombined listsquot column in figure 1we can conclude that the pure list lookup approach performs reasonably well for locations for the person category and especially for the organization category this approach does not yield good performance although the precision was not extremely bad recall was too low ie every second person name or organization 
failed to be assignedfor document retrieval purposes low recall is not necessarily a major problem since it is often sufficient to recognize just one occurrence of each distinctive entity per document and many of the unassigned person and organization names were just repetitions of their full variantsbut for many other applications and for the muc competition higher recall and precision are necessarythe system we fielded for muc7 makes extensive use of what mcdonald calls internal and external evidence in named entity recognitionthe basic philosophy underlying our approach is as followsa string of words like quotadam kluverquot has an internal structure which suggests that this is a person name but we know that it can also be used as a shortcut for a name of organization or location looking it up on a list will not necessarily help the string may not be on a list may be on more than one list or may be on the wrong listhowever somewhere in the text there is likely to be some contextual material which makes it clear what type of named entity it isour strategy is to only make a decision once we have identified this bit of contextual informationwe further assume that once we have identified contextual material which makes it clear that quotadam kluverquot is the name of a company then any other mention of quotadam kluverquot in that document is likely to refer to that companyif the author at some point in the same text also wants to refer to a person called quotadam kluverquot she will provide some extra context to make this clear and this context will be picked up in the first stepthe fact that at first it is only an assumption rather than a certainty that quotadam kluverquot is a company is represented explicitly and later processing components try to resolve the uncertaintyif no suitable context is found anywhere in the text to decide what sort of named entity quotadam kluverquot is the system can check other resources eg a list of known company names and apply compositional phrasal grammars for different categoriessuch grammars for instance can state that if a sequence of capitalized words ends with the word quotltdquot it is a name of organization or if a known first name is followed by an unknown capitalized word this is a person namein our muc system we implemented this approach as a staged combination of a rulebased system with probabilistic partial matchingwe describe each stage in turnin the first step the system applies surefire grammar rulesthese rules combine internal and external evidence and only fire when a possible candidate expression is surrounded by a suggestive contextsurefire rules rely on known corporate designators person titles and definite contexts such as those in figure 2the surefire rules apply after pos tagging and simple semantic tagging so at this stage words like quotformerquot have already been identified as jj words like quotanalystquot have been identified as prof and words like quotbrotherquot as rel at this stage our muc system treats information from the lists as likely rather than definite and always checks if the context is either suggestive or noncontradictivefor example a likely company name with a conjunction is left untagged at this stage if the company is not listed in a list of known companiessimilarly the system postpones the markup of unknown organizations whose name starts with a sentence initial common word as in quotsuspended ceiling contractors ltd denied the chargequotnames of possible locations found in our gazetteer of place names are 
marked as location only if they appear with a context that is suggestive of locationquotwashingtonquot for example can just as easily be a surname or the name of an organizationonly in a suggestive context like quotin washingtonquot will it be marked up as locationafter the surefire symbolic transduction the system performs a probabilistic partial match of the identified entitiesfirst the system collects all named entities already identified in the documentproceedings of eacl 99 it then generates all possible partial orders of the composing words preserving their order and marks them if found elsewhere in the textfor instance if quotadam kluver ltdquot had already been recognised as an organisation by the surefire rule in this second step any occurrences of quotkluver ltdquot quotadam ltdquot and quotadam kluverquot are also tagged as possible organizationsthis assignment however is not definite since some of these words could refer to a different entitythis information goes to a pretrained maximum entropy model for more details on this aproachthis model takes into account contextual information for named entities such as their position in the sentence whether they exist in lowercase in general whether they were used in lowercase elsewhere in the same document etcthese features are passed to the model as attributes of the partially matched wordsif the model provides a positive answer for a partial match the system makes a definite assignmentonce this has been done the system again applies the grammar rulesbut this time the rules have much more relaxed contextual constraints and extensively use the information from already existing markup and from the lexicon compiled during processing eg containing partial orders of already identified named entitiesat this stage the system will mark word sequences which look like person namesfor this it uses a grammar of names if the first capitalized word occurs in a list of first names and the following word are unknown capitalized words then this string can be tagged as a personnote that it is only at this late stage that a list of names is usedat this point we are no longer concerned that a person name can refer to a companyif the name grammar had applied earlier in the process it might erroneously have tagged quotadam kluverquot as a person instead of an organizationbut at this point in the chain of ne processing that is not a problem anymore quotadam kluverquot will by now already have been identified as an organization by the surefire rules or during partial matchingif it has not then it is likely to be the name of a personat this stage the system will also attempt to resolve conjunction problems in names of organisationsfor example in quotchina international trust and investment corpquot the system checks if possible parts of the conjunctions were used in the text on their own and thus are names of different organizations if not the system has no reason to assume that more than one company is being talked aboutin a similar vein the system resolves the attachment of sentence initial capitalized modifiers the problem alluded to above with the quotsuspended ceiling contractors ltdquot example if the modifier was seen with the organization name elsewhere in the text then the system has good evidence that the modifier is part of the company name if the modifier does not occur anywhere else in the text with the company name it is assumed not to be part of itthis strategy is also used for expressions like quotmurdoch news corpquotthe genitival 
quotmurdochquot could be part of the name of the organisation or could be a possessivefurther inspection of the text reveals that rupert murdoch is referred to in contexts which support a person interpretation and quotnews corpquot occurs on its own without the genitiveon the basis of evidence like this the system decides that the name of the organisation is quotnews corpquot and that quotmurdochquot should be tagged separately as a personat this stage known organizations and locations from the lists available to the system are marked in the text again without checking the context in which they occurat this point the system has exhausted its resources the system then performs another partial match to annotate names like quotwhitequot when quotjames whitequot had already been recognised as a person and to annotate company names like quothughesquot when quothughes communications ltdquot had already been identified as an organisationas in partial match 1 this process of partial matching is again followed by a probabilistic assignment supported by the maximum entropy modelfor example conjunction resolution makes use of the fact that in this type of text it is more common to have conjunctions of like entitiesin quothe works for xcx and yyyquot if there is evidence that xxx and yyy are two entities rather than one then it is more likely that xxx and yyy are two entities of the same type ie both organisations or are both people rather than a mix of the twothis means that even if only one of the entities in the conjunction has been recognised as definitely of a certain type the conjunction rule will help decide on the type of the other entityone of the texts in the competition contained the string quotu7ited states and russiaquotbecause of the typo in quotu7ited statesquot it was not found in a gazetteerbut there was internal evidence that it could be a location and there was external evidence that it could be a location these two facts in combination meant that the system correctly identified quotu7ited statesquot as a locationbecause titles of news wires are in capital letters they provide little guidance for the recognition of namesin the final stage of ne processing entities in the title are marked up by matching or partially matching the entities found in the text and checking against a maximum entropy model trained on document titlesfor example in quotgeneral trends analyst predicts little spring explosionquot quotgeneral trendsquot will be tagged as an organization because it partially matches quotgeneral trends incquot elsewhere in the text and quotlittle springquot will be tagged as a location because elsewhere in the text there is supporting evidence for this hypothesisin the headlinequotmurdochquot is correctly identified as a person because of mentions of rupert murdoch elsewhere in the textapplying a name grammar on this kind of headline without checking external evidence might result in erroneously tagging quotmurdoch satellitequot as a person in the muc competition our system combined precision and recall score was 9339this was the highest score better in a statistically significant way than the score of the next best systemscores varied from 9339 to 6967further details on this can be found in the table in figure 3 shows the progress of the performance of the system we fielded for the muc competition through the five stagesas one would expect the surefire rules give very high precision but very low recallin other words they do not find many named entities but the ones they find are 
correctsubsequent phases of processing add gradually more and more named entities but on occasion introduce errors our final score for organisation person and location is given in the bottom line of figure 3our system fielded for the muc competition made extensive use of gazetteers containing around 4900 names of countries and other place names some 30000 names of companies and other organisations and around 10000 first names of peopleas explained in the previous section these lists were used in a judicious way taking into account other internal and external evidence before making a decision about a named entityonly in step 3 is information from the gazetteers used without contextcheckingit is not immediately obvious from figure 3 what exactly the impact is of these gazetteersto try and answer this question we ran our system over 70 articles of the muc competition in different modes the remaining 30 articles were used to compile a limited gazetteer as described below and after that played no role in the experimentsfull gazetteerswe first ran the system again with the full gazetteers ie the gazetteers used in the official muc systemthere are minor differences in recall and precision compared to the official muc results due to the fact that we were using a slightly different corpusno gazetteerswe then ran the system without any gazetteersin this mode the system can still use internal evidence as well as external evidence the hypothesis was that names of organisations and names of people should still be handled relatively well by the system since they have much internal and external evidence whereas names of locations have fewer reliable contextual cluesfor example expressions such as quotxxx is based in yyyquot is not surefire evidence that yyy is a location it could also be an organisationand since many locations are so wellknown they receive very little extra context some locationswe then ran the system with some locational information about 200 names of countries and continents from www yahoo cornregional and because muc rules say explicitly that names of planets should be marked up as locations the names of the 8 planets of our solar systemthe hypothesis was that even with those reasonably common location names named entity recognition would already dramatically improvethis hypothesis was confirmed as can be seen in figure 4inspection of the errors confirms that the system makes most mistakes when there is no internal or external evidence to decide what sort of named entity is involvedfor example in a reference to quota hamburg hospitalquot quothamburgquot no longer gets marked up as a location because the word occurs nowhere else in the text and that context is not sufficient to assume it indicates a location similarly in a reference to quotthe bonn governmentquot quotbonnquot is no longer marked up as a location because of lack of supportive context and in financial newspaper articles nyse will be used without any indication that this is an organisation limited gazetteersthe results so far suggest that the most useful gazetteers are those that contain very common names names which the authors can expect their audience already to know about rather than farfetched examples of little known places or organisationsthis suggests that it should be possible to tune a system to the kinds of named entities that occur in its particular genre of textto test this hypothesis we wanted to know how the system would perform if it started with no gazetteers started processing texts then built up 
gazetteers as it goes along and then uses these gazetteers on a new set of texts in the same domainwe simulated these conditions by taking 30 of the 100 official mug articles and extracting all the names of people organisations and locations and using these as the only gazetteers thereby ensuring that we had extracted named entities from articles in the same domain as the test domainsince we wanted to test how easy it was to build gazetteers automatically we wanted to minimise the amount of processing done on named entities already foundwe decided to only used first names of people and marked them all as quotlikelyquot first names the fact that quotbillquot actually occurs as a first name does not guarantee it will definitely be a first name next time you see itcompany names found in the 30 articles were put in the company gazetteer irrespective of whether they were full company names names of locations found in the 30 texts were simply added to the list of 200 location names already used in the previous experimentsthe hope was that despite the little effort involved in building these limited gazetteers there would be an improved performance of the named entity recognition systemfigure 4 summarises the precision and recall results for each of these modes and confirms the hypothesesthe hypotheses were correct without gazetteers the system still scores in the high eighties for names of organisations and peoplelocations come out badlybut even with a very small number of country names performance for those named entities also goes up into the mideightiesand simple techniques for extending the gazetteers on the basis of a sample of just 30 articles already makes the system competitive againthese experiments suggest that the collection of gazetteers need not be a bottleneck through a judicious use of internal and external evidence relatively small gazetteers are sufficient to give good precision and recallin addition when collecting these gazetteers one can concentrate on the obvious examples of locations and organisations since these are exactly the ones that will be introduced in texts without much helpful contexthowever our experiments only show the usefulness of gazetteers on a particular type of text viz journalistic english with mixed casethe rules as well as the maximum entropy models make use of internal and external evidence in that type of text when trying to identify named entities and it is obvious that this system cannot be applied without modification to a different type of text eg scientific articleswithout further formal evaluations with externally supplied evaluation corpora it is difficult to judge how general this text type isit is encouraging to note that krupka and hausman point out that the muc7 articles which we used in our experiments have less external evidence than do wall street journal articles which suggests that on wall street journal articles our system might perform even better than on muc7 articlesthe work reported in this paper was supported in part by grant grl21952 from the engineering and physical sciences research council ukwe would like to thank steve finch and irina nazarova as well as colin matheson and other members of the language technology group for help in building various tools and other resources that were used in the development of the muc system
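The "partial match" step described in the paper above, generating all order-preserving sub-sequences of an already recognised name such as "Adam Kluver Ltd" and treating them as candidate mentions of the same entity, can be sketched in a few lines. This is an illustrative reconstruction, not the fielded system: in the actual pipeline the candidates are passed to a pre-trained maximum entropy model before any definite assignment is made, and including single-word variants at this stage is my simplification.

from itertools import combinations

def partial_orders(name):
    """All proper sub-sequences of the name's words, preserving word order."""
    words = name.split()
    variants = set()
    for k in range(1, len(words)):
        for idx in combinations(range(len(words)), k):
            variants.add(" ".join(words[i] for i in idx))
    return variants

print(sorted(partial_orders("Adam Kluver Ltd")))
# ['Adam', 'Adam Kluver', 'Adam Ltd', 'Kluver', 'Kluver Ltd', 'Ltd']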
E99-1001
named entity recognition without gazetteers it is often claimed that named entity recognition systems need extensive gazetteers lists of names of people organisations locations and other named entities indeed the compilation of such gazetteers is sometimes mentioned as a bottleneck in the design of named entity recognition systems we report on a named entity recognition system which combines rulebased grammars with statistical models we report on the system performance with gazetteers of different types and different sizes using test material from the muc7 competition we show that for the text type and task of this competition it is sufficient to use relatively small gazetteers of wellknown names rather than large gazetteers of lowfrequency names we conclude with observations about the domain independence of the competition and of our experiments we utilize the discourse level to disambiguate items in non predictive contexts we exploit label consistency information within a document using relatively ad hoc multistage labeling procedures
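The recall and precision measures defined in the paper above can be written down directly. The sketch below is a toy token-level scorer (the real MUC scorer aligns entity extents and handles many more cases) that reproduces the worked example of a 1000-word text containing 20 location words and a system that tags every word as a location.

def score(key_tags, answer_tags, tag="LOCATION"):
    # recall  = correct tags in the answer / total tags of this type in the key
    # precision = correct tags in the answer / total tags of this type in the answer
    correct = sum(1 for k, a in zip(key_tags, answer_tags) if a == tag and k == tag)
    in_key = sum(1 for k in key_tags if k == tag)
    in_answer = sum(1 for a in answer_tags if a == tag)
    recall = correct / in_key if in_key else 0.0
    precision = correct / in_answer if in_answer else 0.0
    return recall, precision

# the worked example from the text: 1000 words, 20 of them locations,
# and a system that marks every single word as a location
key = ["LOCATION"] * 20 + ["O"] * 980
answer = ["LOCATION"] * 1000
print(score(key, answer))   # (1.0, 0.02) -> 100% recall, 2% precision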
an efficient method for determining bilingual word classes in statistical natural language processing we always face the problem of sparse data one way to reduce this problem is to group words into equivalence classes which is a standard method in statistical language modeling in this paper we describe a method to determine bilingual word classes suitable for statistical machine translation we develop an optimization criterion based on a maximumlikelihood approach and describe a clustering algorithm we will show that the usage of the bilingual word classes we get can improve statistical machine translation word classes are often used in language modelling to solve the problem of sparse datavarious clustering techniques have been proposed which perform automatic word clustering optimizing a maximumlikelihood criterion with iterative clustering algorithmsin the field of statistical machine translation we also face the problem of sparse dataour aim is to use word classes in statistical machine translation to allow for more robust statistical translation modelsa naive approach for doing this would be the use of monolingually optimized word classes in source and target languageunfortunately we can not expect these independently optimized classes to be correspondenttherefore monolingually optimized word classes do not seem to be useful for machine translation we define bilingual word clustering as the process of forming corresponding word classes suitable for machine translation purposes for a pair of languages using a parallel training corpusthe described method to determine bilingual word classes is an extension and improvement of the method mentioned in our approach is simpler and computationally more efficient than the task of a statistical language model is to estimate the probability pr of a sequence of words wiv wi wna simple approximation of pr is to model it as a product of bigram probabilities pr hin_ pif we want to estimate the bigram probabilities p using a realistic natural language corpus we are faced with the problem that most of the bigrams are rarely seenone possibility to solve this problem is to partition the set of all words into equivalence classesthe function c maps words w to their classes crewriting the corpus probability using classes we arrive at the following probability model p in this model we have two types of probabilities the transition probability p for class c given its predecessor class c and the membership probability p for word w given class c to determine the optimal classes c for a given number of classes m we perform a maximumlikelihood approach arg mrc p we estimate the probabilities of eq by relative frequencies p nin p ninthe function n provides the frequency of a uni or bigram in the training corpusif we insert this into eq and apply the negative logarithm and change the summation order we arrive at the following optimization proceedings of eacl 99 criterion lp the function h is a shortcut for n logit is necessary to fix the number of classes in c in advance as the optimum is reached if every word is a class of its ownbecause of this it is necessary to perform an additional optimization process which determines the number of classesthe use of leavingoneout in a modified optimization criterion as in could in principle solve this probleman efficient optimization algorithm for lpi is described in section 4in bilingual word clustering we are interested in classes f and e which form partitions of the vocabulary of two languagesto perform bilingual word 
clustering we use a maximumlikelihood approach as in the monolingual casewe maximize the joint probability of a bilingual training corpus to perform the maximization of eq we have to model the monolingual a priori probability p and the translation probability pfor the first we use the classbased bigram probability from eqto model p we assume the existence of an alignment afwe assume that every word fj is produced by the word ea at position a3 in the training corpus with the probability p the word alignment ail is trained automatically using statistical translation models as described in the idea is to introduce the unknown alignment a as hidden variable into a statistical model of the translation probability pby applying the emalgorithm we obtain the model parametersthe alignment cif that we use is the viterbialignment of an hmm alignment model similar to by rewriting the translation probability using word classes we obtain the variables f and e denote special classes in and e we use relative frequencies to estimate p and p the function nt counts how often the words in class f are aligned to words in class e if we insert these relative frequencies into eq and apply the same transformations as in the monolingual case we obtain a similar optimization criterion for the translation probability part of eqthus the full optimization criterion for bilingual word classes is the two count functions n and nt can be combined into one count function ng n nt as for all words f and all words e and e holds n 0 and nt 0using the function n9 we arrive at the following optimization criterion here we defined ngi ex ng and n92 ex n9the variable x runs over the classes in and f in the optimization process it cannot be allowed that words of different languages occur in one classit can be seen that eq is a special case of eq with ng1 n92another possibility to perform bilingual word clustering is to apply a twostep approachin a first step we determine classes s optimizing only the monolingual part of eq and secondly we determine classes f optimizing the bilingual part by using these two optimization processes we enforce that the classes e are monolingually good classes and that the classes 7 correspond to 6interestingly enough this results in a higher translation quality an efficient optimization algorithm for lpi is the exchange algorithm for the optimization of lp2 we can use the same algorithm with small modificationsour starting point is a random partition of the training corpus vocabularythis initial partition is improved iteratively by moving a single word from one class to anotherthe algorithm to determine bilingual classes is depicted in figure 1if only one word w is moved between the partitions c and c the change lp lp can be computed efficiently looking only at classes c for which ng 0 or ng 0we define mc to be the average number of seen predecessor and successor word classeswith the notation i for the number of iterations needed for convergence b for the number of word bigrams m for the number of classes and v for the vocabulary size the computational complexity of this algorithm is roughly i v m moa detailed analysis of the complexity can be found in the algorithm described above provides only a local optimumthe quality of the resulting local optima can be improved if we accept a shortterm degradation of the optimization criterion during the optimization processwe do this in our implementation by applying the optimization method threshold accepting which is an efficient simplification of simulated 
annealingthe statistical machinetranslation method described in makes use of bilingual word classesthe key element of this approach are the alignment templates which are pairs of phrases together with an alignment between the words of the phrasesexamples of alignment templates are shown in figure 2the advantage of the alignment template approach against wordbased statistical translation models is that word context and local reorderings are explicitly taken into accountthe alignment templates are automatically trained using a parallel training corpusthe translation of a sentence is done by a search process which determines the set of alignment templates which optimally cover the source sentencethe bilingual word classes are used to generalize the applicability of the alignment templates in searchif there exists a class which contains all cities in source and target language it is possible that an alignment template containing a special city can be generalized to all citiesmore details are given in we demonstrate results of our bilingual clustering method for two different bilingual corpora the eutransi corpus is a subtask of the quottraveller taskquot which is an artificially generated spanishenglish corpusthe domain of the corpus is a humantohuman communication situation at a reception table 3 example of bilingual word classes el how it pardon what when where which who why e2 my our e3 today tomorrow e4 ask call make e5 carrying changing giving looking moving putting sending showing waking e6 full half quarter si como cual cuando cuanta donde dice dicho hace que quien tiene desk of a hotelthe eutransii corpus is a natural germanenglish corpus consisting of different text types belonging to the domain of tourism bilingual web pages of hotels bilingual touristic brochures and business correspondencethe target language of our experiments is englishwe compare the three described methods to generate bilingual word classesthe classes mono are determined by monolingually optimizing source and target language classes with eqthe classes bil are determined by bilingually optimizing classes with eqthe classes bil2 are determined by first optimizing monolingually classes for the target language and afterwards optimizing classes for the source language and eqfor eutransi we used 60 classes and for eutransii we used 500 classeswe chose the number of classes in such a way that the final performance of the translation system was optimalthe cpu time for optimization of bilingual word classes on an alpha workstation was under 20 seconds for eutransi and less than two hours for eutransiitable 3 provides examples of bilingual word classes for the eutransi corpusit can be seen that the resulting classes often contain words that are similar in their syntactic and semantic functionsthe grouping of words with a different meaning like today and tomorrow does not imply that these words should be translated by the same spanish word but it does imply that the translations of these words are likely to be in the same spanish word classto measure the quality of our bilingual word classes we applied two different evaluation measures exp 31 both measures determine the extent to which the translation probability is spread outa small value means that the translation probability is very focused and that the knowledge of the source language class provides much information about the target language class sertionsdeletionssubstitutions relative to a reference translationas expected the translation quality improves using 
classes. for the small eutransi task the word error rates reduce significantly. the word error rates for the eutransii task are much larger because the task has a very large vocabulary and is more complex. the bilingual classes show better results than the monolingual classes (mono). one explanation for the improvement in translation quality is that the bilingually optimized classes result in an increased average size of used alignment templates. for example the average length of alignment templates with the eutransi corpus using words is 2.85 and using bil2 it is 5.19. the longer the average alignment template length, the more context is used in the translation and therefore the translation quality is higher. an explanation for the superiority of bil2 over bil is that by first optimizing the english classes monolingually it is much more probable that longer sequences of classes occur more often, thereby increasing the average alignment template size. by applying a maximumlikelihood approach to the joint probability of a parallel corpus we obtained an optimization criterion for bilingual word classes which is very similar to the one used in monolingual maximumlikelihood word clustering. for optimization we used the exchange algorithm. the obtained word classes give a low translation lexicon perplexity and improve the quality of statistical machine translation. we expect improvements in translation quality by allowing words to occur in more than one class and by performing a hierarchical clustering. acknowledgements: this work has been partially supported by the european community under the esprit project number 30268.
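To make the clustering procedure above concrete, here is a minimal Python sketch of an exchange-style loop, assuming the standard class-bigram log-likelihood as the optimization criterion (the paper's equations are not reproduced in this extract). The toy corpus, the number of classes, and the from-scratch recomputation of the criterion are all illustrative; the published algorithm updates only the affected counts, which is what makes it fast.

```python
import math
import random
from collections import defaultdict

def class_bigram_criterion(words, word2class):
    # sum_{C,C'} N(C,C') log N(C,C')  -  2 * sum_C N(C) log N(C)
    uni, bi = defaultdict(int), defaultdict(int)
    for w in words:
        uni[word2class[w]] += 1
    for w1, w2 in zip(words, words[1:]):
        bi[(word2class[w1], word2class[w2])] += 1
    return (sum(n * math.log(n) for n in bi.values())
            - 2 * sum(n * math.log(n) for n in uni.values()))

def exchange_clustering(words, num_classes, max_iterations=10, seed=0):
    # start from a random partition, then greedily move single words between classes
    rng = random.Random(seed)
    word2class = {w: rng.randrange(num_classes) for w in sorted(set(words))}
    for _ in range(max_iterations):
        moved = False
        for w in sorted(word2class):
            original = word2class[w]
            best_class, best_score = original, None
            for c in range(num_classes):
                word2class[w] = c
                score = class_bigram_criterion(words, word2class)
                if best_score is None or score > best_score:
                    best_class, best_score = c, score
            word2class[w] = best_class
            moved |= best_class != original
        if not moved:          # local optimum reached
            break
    return word2class

corpus = ("we would like a room we would like a double room "
          "please book a room for monday").split()
for word, cls in sorted(exchange_clustering(corpus, num_classes=3).items()):
    print(cls, word)
```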
E99-1010
an efficient method for determining bilingual word classes. in statistical natural language processing we always face the problem of sparse data. one way to reduce this problem is to group words into equivalence classes, which is a standard method in statistical language modeling. in this paper we describe a method to determine bilingual word classes suitable for statistical machine translation. we develop an optimization criterion based on a maximumlikelihood approach and describe a clustering algorithm. we show that using the resulting bilingual word classes can improve statistical machine translation. we show improvements in the perplexity of a bilingual corpus and in word translation accuracy using a templatebased translation model. we describe a method for determining bilingual word classes used to improve the extraction of alignment templates through alignments between classes, not only between words.
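The bilingual part of the criterion above relies on counts of how often the words in a target class are aligned to the words in a source class. The sketch below shows the relative-frequency estimate of a class translation probability from a toy word-aligned corpus; the sentences, class assignments, and function names are made up for illustration, and the real system obtains its word alignment from a statistical translation model trained with EM rather than from hand-written links.

```python
from collections import defaultdict

def class_translation_probs(aligned_corpus, src_class, tgt_class):
    """Relative-frequency estimate of p(F | E) from alignment counts:
    N_t(F, E) counts how often a word in target class F is aligned to
    a word in source class E."""
    pair_counts = defaultdict(int)
    src_counts = defaultdict(int)
    for src_words, tgt_words, links in aligned_corpus:
        for j, i in links:                      # target position j aligned to source position i
            e_class = src_class[src_words[i]]
            f_class = tgt_class[tgt_words[j]]
            pair_counts[(f_class, e_class)] += 1
            src_counts[e_class] += 1
    return {(f, e): n / src_counts[e] for (f, e), n in pair_counts.items()}

# toy word-aligned sentence pairs: (source, target, links as (target_idx, source_idx))
corpus = [
    (["a", "room"], ["una", "habitacion"], [(0, 0), (1, 1)]),
    (["a", "key"], ["una", "llave"], [(0, 0), (1, 1)]),
]
src_class = {"a": "E1", "room": "E2", "key": "E2"}
tgt_class = {"una": "F1", "habitacion": "F2", "llave": "F2"}
for (f, e), p in sorted(class_translation_probs(corpus, src_class, tgt_class).items()):
    print(f"p({f} | {e}) = {p:.2f}")
```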
representing text chunks dividing sentences in chunks of words is a useful preprocessing step for parsing information extraction and information retrieval have introduced a quotconvenientquot data representation for chunking by converting it to a tagging task in this paper we will examine seven different data representations for the problem of recognizing noun phrase chunks we will show that the the data representation choice has a minor influence on chunking performance however equipped with the most suitable data representation our memorybased learning chunker was able to improve the best published chunking results for a standard data set the text corpus tasks parsing information extraction and information retrieval can benefit from dividing sentences in chunks of words describe an errordriven transformationbased learning method for finding np chunks in textsnp chunks are nonoverlapping nonrecursive noun phrasesin their experiments they have modeled chunk recognition as a tagging task words that are inside a basenp were marked i words outside a basenp received an 0 tag and a special tag b was used for the first word inside a basenp immediately following another basenpa text example original in n early trading ni in n hong kong n n monday ni n gold n was quoted at n 36650 ni n an ounce n tagged other representations for np chunking can be used as wellan example is the representation used in where all the chunkinitial words receive the same start tag while the remainder of the words in the chunk are paired with a different tagthis removes tagging ambiguitiesin the ratnaparkhi representation equal noun phrases receive the same tag sequence regardless of the context in which they appearthe data representation choice might influence the performance of chunking systemsin this paper we discuss how large this influence istherefore we will compare seven different data representation formats for the basenp recognition taskwe are particularly interested in finding out whether with one of the representation formats the best reported results for this task can be improvedthe second section of this paper presents the general setup of the experimentsthe results ean be found in the third sectionin the fourth section we will describe some related workin this section we present and explain the data representation formats and the machine learning algorithm that we have usedin the final part we describe the feature representation used in our experimentswe have compared four complete and three partial data representation formats for the basenp recognition task presented in the four complete formats all use an i tag for words that are inside a basenp and an 0 tag for words that are outside a basenpthey differ gold was quoted at s 36650 an ounce for seven different tagging formatsthe i tag has been used for words inside a basenp 0 for words outside a basenp b and e for basenpinitial words and e and for basenpfinal wordsjob the first word inside a basenp immediately following another basenp receives a b tag i0b2 all basenpinitial words receive a b tag ioe1 the final word inside a basenp immediately preceding another basenp receives an e tag10e2 all basenpfinal words receive an e tagwe wanted to compare these data representation formats with a standard bracket representationwe have chosen to divide bracketing experiments in two parts one for recognizing opening brackets and one for recognizing closing bracketsadditionally we have worked with another partial representation which seemed promising a tagging 
representation which disregards boundaries between adjacent chunksthese boundaries can be recovered by combining this format with one of the bracketing formatsour three partial representations are all basenpinitial words receive an tag other words receive a tagall basenpfinal words receive a tag other words receive a tagi0 words inside a basenp receive an i tag others receive an 0 tagthese partial representations can be combined in three pairs which encode the complete basenp structure of the data a word sequence is regarded as a basenp if the first word has received an tag the final word has received a tag and these are the only brackets that have been assigned to words in the sequence jo in the 10 format tags of words that have received an i tag and an tag are changed into b tagsthe result is interpreted as the 10b2 format10 in the jo format tags of words that have received an i tag and a tag are changed into e tagsthe result is interpreted as the 10e2 formatexamples of the four complete formats and the three partial formats can be found in table 1we have build a basenp recognizer by training a machine learning algorithm with correct tagged data and testing it with unseen datathe machine learning algorithm we used was a memorybased learning algorithm during training it stores a symbolic feature representation of a word in the training data together with its classification in the testing phase the algorithm compares a feature representation of a test word with every training data item and chooses the classification of the training item which is closest to the test itemin the version of the algorithm that we have used is 1ig the distances between feature representations are computed as the weighted sum of distances between individual features equal features are defined to have distance 0 while the distance between other pairs is some featuredependent valuethis value is equal to the information gain of the feature an information theoretic measure which contains the in their treatment of chunkinitial and chunkfinal 1 words normalized entropy decrease of the classification set caused by the presence of the featuredetails of the algorithm can be found in 1an important decision in an mbl experiment is the choice of the features that will be used for representing the data is 1ig is thought to be less sensitive to redundant features because of the datadependent feature weighting that is included in the algorithmwe have found that the presence of redundant features has a negative influence on the performance of the basenp recognizerin a set of transformational rules is used for modifying the classification of wordsthe rules use context information of the words the partofspeech tags that have been assigned to them and the chunk tags that are associated with themwe will use the same information as in our feature representation for wordsin tbl rules with different context information are used successively for solving different problemswe will use the same context information for all datathe optimal context size will be determined by comparing the results of different context sizes on the training datahere we will perform four stepswe will start with testing different context sizes of words with their partofspeech tagafter this we will use the classification results of the best context size for determining the optimal context size for the classification tagsas a third step we will evaluate combinations of classification results and find the best combinationfinally we will examine the influence of an 
mbl algorithm parameter the number of examined nearest neighborswe have used the basenp data presented in 2this data was divided in two partsthe first part was training data and consisted of 211727 words taken from sections 15 16 17 and 18 from the wall street journal corpus the second part was test data and consisted of 47377 words taken from section 20 of the same corpusthe words were partofspeech tagged with the brill tagger and each word was classified as being inside or outside a basenp with the iob1 representation schemethe chunking classification was made by based on the parsing information in the wsj corpusthe performance of the basenp recognizer can be measured in different ways by computing the percentage of correct classification tags the percentage of recognized basenps that are correct and the percentage of basenps in the corpus that are found we will follow and use a combination of the precision and recall rates fo1 in our first experiment series we have tried to discover the best wordpartofspeech tag context for each representation formatfor computational reasons we have limited ourselves to working with section 15 of the wsj corpusthis section contains 50442 wordswe have run 5fold crossvalidation experiments with all combinations of left and right contexts of wordpos tag pairs in the size range 0 to 4a summary of the results can be found in table 2the basenp recognizer performed best with relatively small wordpos tag pair contextsdifferent representation formats required different context sizes for optimal performanceall formats context sizes for the seven representation formats using 5fold crossvalidation on section 15 of the wsj corpus with explicit open bracket information preferred larger left context and most formats with explicit closing bracket information preferred larger right context sizethe three combinations of partial representations systematically outperformed the four complete representationsthis is probably caused by the fact that they are able to use two different context sizes for solving two different parts of the recognition problemin a second series of experiments we used a quotcascadedquot classifierthis classifier has two stages the first cascade is similar to the classifier described in the first experimentfor the second cascade we added the classifications of the first cascade as extra featuresthe extra features consisted of the left and the right context of the classification tagsthe focus chunk tag accounts for the correct classification in about 95 of the casesthe mbl algorithm assigns a large weight to this input feature and this makes it harder for the other features to contribute to a good resultto avoid this we have refrained from using this tagour goal was to find out the optimal number of extra classification tags in the inputwe performed 5fold crossvalidation experiments with all combinations of left and right classification tag contexts in the range 0 tags to 3 tagsa summary of the results can be found in table 33we achieved higher p31 for all representations except for the bracket pair representationthe third experiment series was similar to the second but instead of adding output of one experiment we added classification results of three four or five experiments of the first seriesby doing this we supplied the learning algorithm with information about different context sizesthis information is available to tbl in the rules which use different contextswe have limited ourselves to examining all successive combinations of three four and 
five experiments of the lists and a summary of the results can be found in table 4the results for four representation formats improvedin the fourth experiment series we have experimented with a different value for the number of nearest neighbors examined by the iblig algorithm this algorithm standardly uses the single training item closest to the test 3in a number of cases a different base configuration in one experiment series outperformed the best base configuration found in the previous seriesin the second series lr12 outperformed 22 for 10e2 when chunk tags were added and in the third series chunk tag context 11 outperformed 12 for iob1 when different combinations were tested right classification tag context sizes for the seven representation formats using 5fold crossvalidation on section 15 of the wsj corpus obtained with iblig parameter k3iob1 is the best representation format but the differences with the results of the other formats are not significant itemhowever report that for basenp recognition better results can be obtained by making the algorithm consider the classification values of the three closest training itemswe have tested this by repeating the first experiment series and part of the third experiment series for k3in this revised version we have repeated the best experiment of the third series with the results for k1 replaced by the k3 results whenever the latter outperformed the first in the revised first experiment seriesthe results can be found in table 5all formats benefited from this stepin this final experiment series the best results were obtained with iob1 but the differences with the results of the other formats are not significantwe have used the optimal experiment configurations that we had obtained from the fourth experiment series for processing the complete data setthe results can be found in table 6they are better than the results for section 15 because more training data was used in these experimentsagain the best result was obtained with iob1 which is an improvement of the best reported fi31 rate for this data set 9203we would like to apply our learning approach to the large data set mentioned in wall street journal corpus sections 221 as training material and section 0 as test materialwith our present hardware applying our optimal experiment configuration to this data would require several months of computer timetherefore we have only used the best stage 1 approach with iob1 tags a left and right context of three words and three pos tags combined with k3this time the chunker achieved a p31 score of 9381 which is half a point better than the results obtained by 933 the concept of chunking was introduced by abney in he suggested to develop a chunking parser which uses a twopart syntactic analysis creating word chunks and attaching the chunks to create complete syntactic treesabney obtained support for such a chunking stage from psycholinguistic literatureramshaw and marcus used transformationbased learning for developing two chunkers one was trained to recognize basenps and the other was trained to recognize both np chunks and vp chunksramshaw and marcus approached the chunking task as a tagging problemtheir basenp training and test data from the wall street journal corpus are still being used as benchmark data for current chunking experiments shows that basenp recognition is easier than finding both np and vp chunks and that increasing the size of the training data increases the performance on the test setthe work by ramshaw and marcus has inspired three 
other groups to build chunking algorithms introduce memorybased sequence learning and use it for different chunking experimentstheir algorithm stores sequences of pos tags with chunk brackets and uses this information for recognizing chunks in unseen datait performed slightly worse on basenp recognition than the experiments uses a related method but they only store pos tag sequences forming complete basenpsthese sequences were applied to unseen tagged data after which postprocessing repair rules were used for fixing some frequent errorsthis approach performs worse than other reported approaches training data setthe data was processed with the optimal input feature combinations found in the fourth experiment seriesthe accuracy rate contains the fraction of chunk tags that was correctthe other three rates regard basenp recognitionthe bottom part of the table shows some other reported results with this data setwith all but two formats islig achieves better fo1 rates than the best published result in uses cascaded decision tree learning for basenp recognitionthis algorithm stores context information of words pos tags and chunking tags in a decision tree and classifies new items by comparing them to the training itemsthe algorithm is very fast and it reaches the same performance as uses cascaded mbl in a similar way for several tasks among which basenp recognitionthey do not report foi rates but their tag accuracy rates are a lot better than accuracy rates reported by othershowever they use the data set in a different trainingtest division which makes it difficult to compare their results with otherswe have compared seven different data formats for the recognition of basenps with memorybased learning the i0b1 format introduced in consistently came out as the best formathowever the differences with other formats were not significantsome representation formats achieved better precision rates others better recall ratesthis information is useful for tasks that require chunking structures because some tasks might be more interested in high precision rates while others might be more interested in high recall ratesthe 0311g algorithm has been able to improve the best reported fo1 rates for a standard data set 9203this result was aided by using nonstandard parameter values and the algorithm was sensitive for redundant input featuresthis means that finding an optimal performance or this task requires searching a large parameterfeature configuration spacean interesting topic for future research would be to embed islig in a standard search algorithm like hillclimbing and explore this parameter spacesome more room for improved performance lies in computing the pos tags in the data with a better tagger than presently used
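As a companion to the representation comparison above, the following sketch encodes gold base-NP spans in the four complete tagging styles (IOB1, IOB2, IOE1, IOE2). The helper is written from the definitions quoted in the paper; the token sequence and spans are illustrative, not taken from the WSJ data.

```python
def spans_to_tags(n_tokens, spans, style="IOB1"):
    """Encode base-NP spans, given as (start, end) with exclusive end,
    in one of the four complete tagging styles compared above."""
    tags = ["O"] * n_tokens
    spans = sorted(spans)
    prev_end = None
    for start, end in spans:
        for i in range(start, end):
            tags[i] = "I"
        if style == "IOB2":
            tags[start] = "B"                       # every chunk-initial word
        elif style == "IOB1" and prev_end == start:
            tags[start] = "B"                       # only when a chunk follows a chunk
        if style == "IOE2":
            tags[end - 1] = "E"                     # every chunk-final word
        prev_end = end
    if style == "IOE1":
        for (_, e1), (s2, _) in zip(spans, spans[1:]):
            if e1 == s2:                            # only when a chunk precedes a chunk
                tags[e1 - 1] = "E"
    return tags

tokens = ["in", "early", "trading", "in", "hong", "kong", "monday"]
chunks = [(1, 3), (4, 6), (6, 7)]   # [early trading] [hong kong] [monday]
for style in ("IOB1", "IOB2", "IOE1", "IOE2"):
    print(style, spans_to_tags(len(tokens), chunks, style))
```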
E99-1023
representing text chunks. dividing sentences into chunks of words is a useful preprocessing step for parsing, information extraction and information retrieval. ramshaw and marcus (1995) have introduced a convenient data representation for chunking by converting it to a tagging task. in this paper we will examine seven different data representations for the problem of recognizing noun phrase chunks. we will show that the data representation choice has a minor influence on chunking performance. however, equipped with the most suitable data representation, our memorybased learning chunker was able to improve the best published chunking results for a standard data set. we describe in detail the iob schemes.
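The chunking results above are reported as precision and recall over complete base-NPs, combined into an F(beta=1) rate. The sketch below computes these chunk-level scores from IOB2-style tag sequences; it is a simplified stand-in for the shared evaluation script, and the example sentences are invented.

```python
def tags_to_spans(tags):
    """Collect (start, end) chunk spans from an IOB2-style tag sequence."""
    spans, start = [], None
    for i, tag in enumerate(list(tags) + ["O"]):   # sentinel flushes a final chunk
        if start is not None and tag in ("B", "O"):
            spans.append((start, i))
            start = None
        if tag == "B":
            start = i
    return spans

def chunk_fscore(gold_tags, pred_tags, beta=1.0):
    gold, pred = set(tags_to_spans(gold_tags)), set(tags_to_spans(pred_tags))
    correct = len(gold & pred)
    precision = correct / len(pred) if pred else 0.0
    recall = correct / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return precision, recall, 0.0
    f = (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)
    return precision, recall, f

gold = ["O", "B", "I", "O", "B", "I", "B"]       # [early trading] [hong kong] [monday]
pred = ["O", "B", "I", "O", "B", "I", "I"]       # merges the last two chunks
print(chunk_fscore(gold, pred))                  # -> (0.5, 0.333..., 0.4)
```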
on coreference resolution performance metrics. the paper proposes a constrained entity alignment fmeasure for evaluating coreference resolution. the metric is computed by aligning reference and system entities with the constraint that a system entity is aligned with at most one reference entity. we show that the best alignment is a maximum bipartite matching problem which can be solved by the kuhnmunkres algorithm. comparative experiments are conducted to show that the widely known muc fmeasure has serious flaws in evaluating a coreference system. the proposed metric is also compared with the acevalue, the official evaluation metric in the automatic content extraction task, and we conclude that the proposed metric possesses some properties such as symmetry and better interpretability missing in the acevalue. a working definition of coreference resolution is partitioning the noun phrases we are interested in into equivalence classes, each of which refers to a physical entity. we adopt the terminologies used in the automatic content extraction (ace) task and call each individual phrase a mention and an equivalence class an entity. for example, in the following text segment: the american medical association voted yesterday to install the heir apparent as its presidentelect, rejecting a strong upstart challenge by a district doctor who argued that the nations largest physicians group needs stronger ethics and new leadership. mentions are underlined: american medical association, its and group refer to the same organization and they form an entity. similarly the heir apparent and presidentelect refer to the same person and they form another entity. it is worth pointing out that the entity definition here is different from what is used in the message understanding conference (muc) task: ace entity is called coreference chain or equivalence class in muc, and ace mention is called entity in muc. an important problem in coreference resolution is how to evaluate a systems performance. a good performance metric should have the following two properties
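The excerpt stops just before the metric is spelled out, but its central step, aligning reference and system entities one-to-one so that the total similarity is maximized, can be sketched directly. Below, scipy's linear_sum_assignment plays the role of the Kuhn-Munkres algorithm, mention overlap is used as the entity similarity, and the alignment score is turned into an F-measure in the usual constrained-alignment way; treat this as an illustration rather than the paper's exact definition.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def constrained_alignment_f(reference, system):
    """Align reference and system entities (sets of mentions) one-to-one,
    maximizing total similarity, then turn the total into an F-measure.
    Similarity phi(R, S) = |R & S| (mention overlap) is one simple choice."""
    sim = np.array([[len(r & s) for s in system] for r in reference], dtype=float)
    rows, cols = linear_sum_assignment(-sim)          # maximize => negate the costs
    total = sim[rows, cols].sum()
    recall = total / sum(len(r) for r in reference)
    precision = total / sum(len(s) for s in system)
    return 2 * precision * recall / (precision + recall) if total else 0.0

# toy example echoing the text: mentions identified by string keys
reference = [{"AMA", "its", "group"}, {"heir apparent", "president-elect"}]
system = [{"AMA", "its"}, {"group", "heir apparent", "president-elect"}]
print(round(constrained_alignment_f(reference, system), 3))   # -> 0.8
```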
H05-1004
on coreference resolution performance metrics. the paper proposes a constrained entityalignment fmeasure for evaluating coreference resolution. the metric is computed by aligning reference and system entities with the constraint that a system entity is aligned with at most one reference entity. we show that the best alignment is a maximum bipartite matching problem which can be solved by the kuhnmunkres algorithm. comparative experiments are conducted to show that the widelyknown muc fmeasure has serious flaws in evaluating a coreference system. the proposed metric is also compared with the acevalue, the official evaluation metric in the automatic content extraction task, and we conclude that the proposed metric possesses some properties such as symmetry and better interpretability missing in the acevalue. we use a bell tree to score and store the searching path.
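For comparison, the MUC F-measure criticized above is link-based rather than alignment-based. The excerpt does not restate it, so the sketch below follows the standard formulation of Vilain et al. (1995): recall counts how many coreference links of the key are recovered, and precision swaps the roles of key and response. The toy entities echo the example mentions from the text.

```python
def muc_scores(key, response):
    """Link-based MUC scorer (Vilain et al., 1995)."""
    def link_recall(gold, predicted):
        num = den = 0
        for entity in gold:
            # partition the gold entity by the predicted entities;
            # uncovered mentions count as singletons
            parts = [entity & p for p in predicted if entity & p]
            uncovered = entity - set().union(*parts) if parts else set(entity)
            num += len(entity) - (len(parts) + len(uncovered))
            den += len(entity) - 1
        return num / den if den else 0.0

    r = link_recall(key, response)
    p = link_recall(response, key)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

key = [{"AMA", "its", "group"}, {"heir apparent", "president-elect"}]
response = [{"AMA", "its"}, {"group", "heir apparent", "president-elect"}]
print(muc_scores(key, response))    # -> (0.666..., 0.666..., 0.666...)
```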
a discriminative matching approach to word alignment we present a discriminative large margin approach to featurebased matching for word alignment in thisframework pairs of word tokens re ceive a matching score which is basedon features of that pair including mea sures of association between the wordsdistortion between their positions sim ilarity of the orthographic form and soon even with only 100 labeled train ing examples and simple features whichincorporate counts from a large unlabeled corpus we achieve aer perfor mance close to ibm model 4 in muchless time including model 4 predic tions as features we achieve a relativeaer reduction of 22 in over inter sected model 4 alignments the standard approach to word alignment from sentencealigned bitexts has been to constructmodels which generate sentences of one language from the other then fitting those genera tive models with them this approach has two primary advantages and two primary drawbacksin itsfavor generative models of alignment are wellsuited for use in a noisychannel translation systemin addition they can be trained in an un supervised fashion though in practice they do require labeled validation alignments for tuning model hyperparameters such as null counts orsmoothing amounts which are crucial to pro ducing alignments of good qualitya primarydrawback of the generative approach to alignment is that as in all generative models explicitly incorporating arbitrary features of the in put is difficultfor example when considering whether to align two words in the ibm models one cannot easily include information about such features as orthographic similarity presence of the pair in various dictionaries similarity of the frequency of the two words choices made by other alignment systems on this sentence pair and so onwhile clever models can implicitly capture some of these information sources ittakes considerable work and can make the resulting models quite complexa second draw back of generative translation models is that since they are learned with them they require extensive processing of large amounts of data to achieve good performancewhile tools likegiza do make it eas ier to build on the long history of the generativeibm approach they also underscore how com plex highperformance generative models can and have becomein this paper we present a discriminative ap proach to word alignmentword alignment is cast as a maximum weighted matching problem in which each pair of words in a sentence pair is associated with a score s jk reflecting the desirability of the alignment of that pairthe alignment 73 for the sentence pair is then the highest scoring matching under some constraints for example the requirement that matchings be onetoonethis view of alignment as graph matching isnot in itself new melamed uses com petitive linking to greedily construct matchingswhere the pair score is a measure of word toword association and matusov et al find exact maximum matchings where the pair scores come from the alignment posteriors of generative modelstiedemann proposes incorporating a variety of word association cluesinto a greedy linking algorithmwhat we contribute here is a principled ap proach for tractable and efficient learning of the alignment score s jk as a function of arbitrary features of that token pairthis con tribution opens up the possibility of doing the kind of feature engineering for alignment that has been so successful for other nlp taskswefirst present the algorithm for large margin es timation of the scoring functionwe then 
showthat our method can achieve aer rates com parable to unsymmetrized ibm model 4 usingextremely little labeled data and a simple feature setremarkablyby including bidirectional ibm model 4 predic tions as features we achieve an absolute aer of 54 on the englishfrench hansards alignmenttask a relative reduction of 22 in aer over intersected model 4 alignments and to our knowl edge the best aer result published on this taskwe model the alignment prediction task as a maximum weight bipartite matching problem where nodes correspond to the words in the two sentencesfor simplicity we assume here that each word aligns to one or zero words in the other sentencethe edge weight s jkrepre sents the degree to which word j in one sentencecan translate into the word k in the other sen tenceour goal is to find an alignment that maximizes the sum of edge scoreswe represent a matching using a set of binary variables y jk that are set to 1 if word j is assigned to word k in the other sentence and 0 otherwisethe score of an assignment is the sum of edge scores s jk s jk y jk the maximum weight bipartite matching problem arg maxyy s canbe solved using well known combinatorial algo rithms or the following linear program max z jk s jk z jk st j z jk 1 k z jk 1 0 z jk 1 where the continuous variables z jk correspond to the binary variables y jk this lp is guaranteedto have integral solutions for any scoring function s note that although the above lp can be used to compute alignments combinatorial algorithms are generally more efficienthowever we use the lp to develop the learning algorithm belowfor a sentence pair x we denote position pairs by x jk and their scores as s jk we let us jk wf for some user provided fea ture mapping f and abbreviate wf jk y jk wfwe can include in the fea ture vector the identity of the two words their relative positions in their respective sentences their partofspeech tags their string similarity and so onat this point one can imagine estimating alinear matching model in multiple ways includ ing using conditional likelihood estimation anaveraged perceptron update or inlargemargin fashionconditional likelihood es timation using a loglinear model p 1 z w expwf requires summing over all matchings to compute the normalization zw which is pcomplete in ourexperiments we therefore investigated the aver aged perceptron in addition to the largemargin method outlined below21 largemargin estimationwe follow the largemargin formulation of taskar et al our input is a set of training instances m i1 where each in stance consists of a sentence pair x i and a target 74 alignment y i we would like to find parametersw that predict correct alignments on the train ing data y i arg max y i y i wf i where y i is the space of matchings appropriate for the sentence pair iin standard classification problems we typi cally measure the error of predictionusing the simple 01 lossin structured prob lems where we are jointly predicting multiple variables the loss is often more complexwhile the fmeasure is a natural loss function for this task we instead chose a sensible surrogate that fits better in our framework hamming distance between y i and yi which simply counts the number of edges predicted incorrectlywe use an svmlike hinge upper bound on the loss given by max y i y i wf i i wf i wherei and f i fminimizing this upper bound encourages the true alignment y i to be optimal with respect to w for each instance i min wi max y i y i wf i i wf i where is a regularization parameterin this form the estimation 
problem is a mixture of continuous optimization over w and com binatorial optimization over y i in order totransform it into a more standard optimization problem we need a way to efficiently handle the lossaugmented inference max y i y i wf i i this optimization problem has precisely the same form as the prediction prob lem whose parameters we are trying to learn max y i y i wf i but with an additionalterm corresponding to the loss functionour as sumption that the loss function decomposes over the edges is crucial to solving this problemin particular we use weighted hamming distance which counts the number of variables in which a candidate solution yi differs from the target output y i with different cost for false positives and false negatives i jk cy ijk cy ijk jk cy ijk jk c y ijk y ijk the lossaugmented matching problem can thenbe written as an lp similar to equation 1 max z jk z ijk wf c y ijk st j z ijk 1 k z ijk 1 0 z ijk 1hence without any approximations we have a continuous optimization problem instead of a combinatorial one max y i y i wf i i d i max z i z i z i where d i jk cy ijk is the constant term f i is the appropriate matrix that has a column of features f for each edge jk c i is the vector of the loss terms c y ijk and finally z i z i j z ijk 1 k z ijk 1 0 z ijk 1plugging this lp back into our estimation problem we have min wmax zz i wf i z i c i z i wf i y i where z z 1 z m z z 1 z m instead of the derivation in taskar et al which produces a joint convex optimization problem using lagrangian duality here we tackle the problem in its natural saddlepoint form22 the extragradient methodfor saddlepoint problems a wellknown solution strategy is the extragradient method which is closely related to projectedgradient methodsthe gradient of the objective in equation 2 is given by i f i and f i w c i we de note the euclidean projection of a vector onto z i as p z i arg minuz iv you and pro jection onto the ball w as p wmax75an iteration of the extragradient method con sists of two very simple steps prediction wt1 p zt1 i p z i and correction wt1 p zt1 i p z i where k are appropriately chosen step sizesthe method is guaranteed to converge linearly to a solution w zplease see wwwcsberkeleyedutaskarextragradientpdf for more detailsthe key subroutine of the algorithm is eu clidean projection onto the feasible sets z i incase of word alignment z i is the convex hull of bipartite matchings and the problem reduces to the muchstudied minimum cost quadratic flow problem the projection problem p z i is given by min z jk 1 2 2 st j z ijk 1 k z ijk 1 0 z ijk 1we can now use a standard reduction of bipar tite matching to min cost flow by introducing a source node connected to all the words in one sentence and a sink node connected to all thewords in the other sentence using edges of ca pacity 1 and cost 0the original edges jk have a quadratic cost 1 2 2 and capacity 1now the minimum cost flow from the source to the sink computes projection of zi onto z i we use standard publiclyavailable code for solving this problem we applied this matching algorithm to word level alignment using the englishfrench hansards data from the 2003 naacl shared task this corpus consists of 11m automatically aligned sentences and comes with a validation set of 39 sentence pairs and a test set of 447 sentencesthe validation and test sentences have been handaligned and are marked with both sure and possible alignmentsusing these alignments alignment error rate is calculated as aer 1 a s a p a s here a is a set of 
proposed index pairs s is the sure gold pairs and p is the possible goldpairsfor example in figure 1 proposed align ments are shown against gold alignments with open squares for sure alignments rounded open squares for possible alignments and filled black squares for proposed alignmentssince our method is a supervised algorithm we need labeled examplesfor the training data we split the original test set into 100 trainingexamples and 347 test examplesin all our ex periments we used a structured loss function that penalized false negatives 3 times more than false positives where 3 was picked bytesting several values on the validation setin stead of selecting a regularization parameter and running to convergence we used early stopping as a cheap regularization method by set ting to a very large value and running the algorithm for 500 iterationswe selected a stopping point using the validation set by simply picking the best iteration on the validation set in terms of aer all selected iterations turned out to be in the first 50 iterations as the algorithm converged fairly rapidly31 features and resultsvery broadly speaking the classic ibm mod els of wordlevel translation exploit four primary sources of knowledge and constraint association of words competition betweenalignments zero or firstorder preferences of alignment positions and fer tility we model all of these in some way 76 on e of th e ma jo r ob je ct iv es of th es e co ns ul ta ti on s is to ma ke su re th at th e re co ve ry be ne fi ts al l le un de les grands objectifs de les consultations est de faire en sorte que la relance profite egalement a tous on e of th e ma jo r ob je ct iv es of th es e co ns ul ta ti on s is to ma ke su re th at th e re co ve ry be ne fi ts al l le un de les grands objectifs de les consultations est de faire en sorte que la relance profite egalement a tous dice only dice and distance on e of th e ma jo r ob je ct iv es of th es e co ns ul ta ti on s is to ma ke su re th at th e re co ve ry be ne fi ts al l le un de les grands objectifs de les consultations est de faire en sorte que la relance profite egalement a tous on e of th e ma jo r ob je ct iv es of th es e co ns ul ta ti on s is to ma ke su re th at th e re co ve ry be ne fi ts al l le un de les grands objectifs de les consultations est de faire en sorte que la relance profite egalement a tous dice distance orthographic and bothshort all features figure 1 example alignments for each successive feature setexcept fertility1first and most importantly we want to include information about word association trans lation pairs are likely to cooccur together in a bitextthis information can be captured among many other ways using a feature whose 1in principle we can model also model fertility by allowing 0k matches for each word rather than 01 and having bias features on each wordhowever we did not explore this possibilityvalue is the dice coefficient dice 2cef c e c f here c e and c f are counts of word occurrences in each language while c ef is the number of cooccurrences of the two wordswith just this feature on a pair of word tokens we can already make a stab 77 at word alignment aligning say each english word with the french word with thehighest dice value sim ply as a matchingfree heuristic modelwith dice counts taken from the 11m sentences thisgives and aer of 387 with english as the tar get and 360 with french as the target as observed in melamed this use ofdice misses the crucial constraint of competition a candidate source word with high asso 
ciation to a target word may be unavailable for alignment because some other target has an even better affinity for that source wordmelameduses competitive linking to incorporate this con straint explicitly while the ibmstyle models get this effect via explainingaway effects in them trainingwe can get something much like the combination of dice and competitive linking by running with just one feature on each pair the dice value of that pairs words2 with just a dice feature meaning no learning is needed yet we achieve an aer of 298 between the dice with competitive linking result of 340 and model 1 of 259 given in och and ney an example of the alignment at this stage is shown in figure 1note that most errors lie off the diagonal for example the oftencorrect toa matchibm model 2 as usually implemented addsthe preference of alignments to lie near the di agonalmodel 2 is driven by the product of a wordtoword measure and a gaussian distribution which penalizes distortion from thediagonalwe can capture the same effect using features which reference the relative posi tions j and k of a pair in addition to amodel 2style quadratic feature referencing relative position we threw in the following proximity features absolute difference in relative posi tion abs and the square and squareroot of this valuein addition we used a con junction feature of the dice coefficient times the proximityfinally we added a bias feature on each edge which acts as a threshold that allows 2this is not quite competitive linking because we use a nongreedy matchingin 19 78 am er ic an s di vo rc ed 1 12 2 00 0 ti me s en 1978 on a enregistre1122000 divorces sur le continent in 19 78 am er ic an s di vo rc ed 1 12 2 00 0 ti me s en 1978 on a enregistre1122000 divorces sur le continent figure 2 example alignments showing the ef fects of orthographic cognate features dice and distance with orthographic featuressparser higher precision alignmentswith these features we got an aer of 155 note that we already have a capacity that model 2 does not we can learn a nonquadratic penalty with linear mixtures of our various components this gives a similar effect to learning the variance of the gaussian for model 2 but is at least in principle more flexible3 these features fix the toa error in figure 1 giving the alignment in figure 1on top of these features we included other kinds of information such as wordsimilarityfeatures designed to capture cognate informationwe added a feature forexact match of words exact match ignoring accents exact matching ignoring vowels and frac tion overlap of the longest common subsequencesince these measures were only useful for long words we also added a feature which indicatesthat both words in a pair are shortthese or thographic and other features improved aer to144the running example now has the align ment in figure 1 where one improvement may be attributable to the short pair feature it has stopped proposing thede partially because the short pair feature downweights the score of that paira clearer example of these features making a difference is shown in figure 2 whereboth the exactmatch and character overlap fea 3the learned response was in fact close to a gaussian but harsher near zero displacement78 tures are usedone source of constraint which our model stilldoes not explicitly capture is the firstorder de pendency between alignment positions as in thehmm model and ibm models 4the thele error in figure 1 is symp tomatic of this lackin particular it is a slightly better pair according to the dice 
value than the correct theleshowever the latter alignment has the advantage that majorgrands follows itto use this information source we included a feature which gives the dice value of the wordsfollowing the pair4 we also added a word frequency feature whose value is the absolutedifference in log rank of the words discourag ing very common words from translating to very rare onesfinally we threw in bilexical features of the pairs of top 5 nonpunctuation words ineach language5 this helped by removing spe cific common errors like the residual tendency for french de to mistakenly align to english the the resulting model produces the alignment in figure 1it has sorted out the thele theles confusion and is also able to guess tode which is not the most common translation for either word but which is supported by the good dice value on the following pair with all these features we got a final aer of 107 broadly similar to the 89 or 97 aers of unsymmetrized ibm model 4 trained on the same data that the dice counts were takenfrom6 of course symmetrizing model 4 by in tersecting alignments from both directions does yield an improved aer of 69 so while ourmodel does do surprisingly well with cheaply ob tained countbased features model 4 does still outperform it so farhowever our model can4it is important to note that while our matching algo rithm has no firstorder effects the features can encode such effects in this way or in better ways eg using as features posteriors from the hmm model in the style of matusov et al 5the number of such features which can be learned depends on the number of training examples and since some of our experiments used only a few dozen training examples we did not make heavy use of this feature6note that the common word pair features affectedcommon errors and therefore had a particularly large i am pact on aermodel aer dice 387 360 model 4 89 97 69 discriminative matching dice feature only 298 distance features 155 word shape and frequency 144 common words and nextdice 107 model 4 predictions 54 figure 3 aer on the hansards taskalso easily incorporate the predictions of model 4 as additional featureswe therefore added three new features for each edge the prediction of model 4 in the englishfrench direction the prediction in the frenchenglish direction and the intersection of the two predictionswith these powerful new features our aer dropped dramatically to 54 a 22 improvement over the intersected model 4 performanceanother way of doing the parameter estima tion for this matching task would have been to use an averaged perceptron method as in collins in this method we merely run our matching algorithm and update weights based on the difference between the predictedand target matchingshowever the perfor mance of the average perceptron learner on the same feature set is much lower only 81 not even breaking the aer of its best single feature 32 scaling experimentswe explored the scaling of our method by learn ing on a larger training set which we created by using giza intersected bidirectional model 4 alignments for the unlabeled sentence pairswe then took the first 5k sentence pairs from these 11m model 4 alignmentsthis gave us more training data albeit with noisier labelson a 34ghz intel xeon cpu giza took 18 hours to align the 11m words while ourmethod learned its weights in between 6 min utes and three hours 79we have presented a novel discriminative large margin method for learning wordalignment models on the basis of arbitrary features of wordpairswe have shown that 
our method is suitable for the common situation where a moder ate number of good fairly general features must be balanced on the basis of a small amount of labeled datait is also likely that the method will be useful in conjunction with a large labeled alignment corpus we presented features capturing a few separate sources of information producing alignments on the order of those given by unsymmetrized ibm model 4 in addition when given bidirectional model 4 predictions as features our method provides a 22 aer reduction over intersected model 4 predictions alonethe resulting 54 aer on the englishfrench hansarks task isto our knowledge the best published aer fig ure for this training scenario finally our method scales to large numbers of training sentences and trains in minutes rather than hours or days for thehighernumbered ibm models a particular ad vantage when not using features derived from those slower models
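A compact sketch of the two matching problems at the heart of this model: prediction, where each candidate link jk receives a score w.f(x, j, k) and the best one-to-zero-or-one alignment is chosen, and loss-augmented inference, where weighted Hamming costs for false positives and false negatives are folded into the edge scores before solving the same matching. scipy's linear_sum_assignment stands in for the combinatorial solver (and for the min-cost-flow projection the paper uses inside the extragradient updates); the feature tensor, weights, and cost values are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def score_matrix(weights, features):
    # s_jk = w . f(x, j, k), with features of shape (n_src, n_tgt, n_features)
    return np.einsum("d,jkd->jk", weights, features)

def best_matching(scores):
    """One-to-(zero-or-one) alignment maximizing the summed edge scores.
    Zero-score dummy columns let a source word stay unlinked, so only
    positive-scoring links are ever proposed (the bias feature plays the
    role of a threshold)."""
    n, m = scores.shape
    padded = np.hstack([scores, np.zeros((n, n))])
    rows, cols = linear_sum_assignment(-padded)      # maximize => negate
    return [(j, k) for j, k in zip(rows, cols) if k < m]

def loss_augmented(scores, gold_links, c_fp=1.0, c_fn=3.0):
    """Fold the decomposed Hamming loss into the edge scores: non-gold links
    get a bonus of c_fp, gold links a penalty of c_fn, so the solver is
    pushed toward the most 'dangerous' wrong alignment used during
    large-margin training."""
    augmented = scores + c_fp
    for j, k in gold_links:
        augmented[j, k] = scores[j, k] - c_fn
    return augmented

# toy problem: 2 source words, 3 target words, 2 features per candidate link
features = np.random.RandomState(0).rand(2, 3, 2)
weights = np.array([1.0, -0.5])
scores = score_matrix(weights, features)
print(best_matching(scores))
print(best_matching(loss_augmented(scores, gold_links=[(0, 0), (1, 2)])))
```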
H05-1010
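The linear-programming form of the matching problem comes out garbled in the extracted text, so here it is restated and solved directly with an off-the-shelf LP solver: maximize sum_jk s_jk z_jk subject to sum_j z_jk <= 1, sum_k z_jk <= 1 and 0 <= z_jk <= 1. As the paper notes, the optimum of this relaxation is integral for bipartite matching, although dedicated combinatorial algorithms are more efficient in practice; the score matrix below is made up.

```python
import numpy as np
from scipy.optimize import linprog

def matching_lp(scores):
    """Relaxed matching LP over variables z_jk (row-major flattening)."""
    n, m = scores.shape
    c = -scores.reshape(-1)                      # linprog minimizes
    A = np.zeros((n + m, n * m))
    for j in range(n):                           # each source word used at most once
        A[j, j * m:(j + 1) * m] = 1.0
    for k in range(m):                           # each target word used at most once
        A[n + k, k::m] = 1.0
    res = linprog(c, A_ub=A, b_ub=np.ones(n + m), bounds=(0, 1), method="highs")
    return res.x.reshape(n, m)

scores = np.array([[2.0, 0.1, 0.3],
                   [0.2, 1.5, 1.4]])
print(matching_lp(scores))    # integral 0/1 solution selecting links (0,0) and (1,1)
```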
a discriminative matching approach to word alignment. we present a discriminative largemargin approach to featurebased matching for word alignment. in this framework pairs of word tokens receive a matching score which is based on features of that pair, including measures of association between the words, distortion between their positions, similarity of the orthographic form and so on. even with only 100 labeled training examples and simple features which incorporate counts from a large unlabeled corpus, we achieve aer performance close to ibm model 4 in much less time. including model 4 predictions as features, we achieve a relative aer reduction of 22% over intersected model 4 alignments. we use a large margin approach by factoring the structure level constraints into constraints at the level of an alignment link. we use a onetoone constraint where words in either sentence can participate in at most one link. we cast the problem of alignment as a maximum weight bipartite matching problem where nodes correspond to the words in the two sentences.
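Two quantities that recur in the discussion above come out garbled in the extraction, so they are written out explicitly here: the Dice association score, Dice(e, f) = 2 C(e, f) / (C(e) + C(f)), and the alignment error rate, AER = 1 - (|A intersect S| + |A intersect P|) / (|A| + |S|), where A is the proposed link set and S and P are the sure and possible gold links. The counts and link sets in the demo are invented.

```python
def dice(c_ef, c_e, c_f):
    """Dice(e, f) = 2 * C(e, f) / (C(e) + C(f))."""
    return 2.0 * c_ef / (c_e + c_f) if (c_e + c_f) else 0.0

def aer(proposed, sure, possible):
    """AER = 1 - (|A & S| + |A & P|) / (|A| + |S|); sure links are assumed
    to be contained in the possible set, as in the Hansards annotation."""
    a, s = set(proposed), set(sure)
    p = set(possible) | s
    return 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))

print(dice(c_ef=600, c_e=1000, c_f=900))             # -> 0.63...
proposed = [(0, 0), (1, 1), (2, 3)]
sure = [(0, 0), (1, 1)]
possible = [(2, 2)]
print(aer(proposed, sure, possible))                 # 1 - (2 + 2) / (3 + 2) = 0.2
```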
a discriminative framework for bilingual word alignment bilingual word alignment forms the foun dation of most approaches to statisticalmachine translation current word align ment methods are predominantly based on generative models in this paper we demonstrate a discriminative approachto training simple word alignment mod els that are comparable in accuracy tothe more complex generative models nor mally used these models have the theadvantages that they are easy to add fea tures to and they allow fast optimization of model parameters using small amounts of annotated data bilingual word alignment is the first step of most current approaches to statistical machine translationalthough the best performing systems are phrase based possible phrasetranslations are normally first extracted from wordaligned bilingual text segmentsthe standard approach to word alignment makes use of various com binations of five generative models developed at ibm by brown et al sometimes augmented by an hmmbased model or och and neys model 6the best combinations of these models can produce high accuracy alignmentsat least when trained on a large corpus of fairly di rect translations in related languagesthese standard models are less than ideal how ever in a number of ways two of which we address in this paperfirst although the standard models cantheoretically be trained without supervision in prac tice various parameters are introduced that should be optimized using annotated datafor exampleoch and ney suggest supervised optimization of a number of parameters including the prob ablity of jumping to the empty word in the hmmmodel as well as smoothing parameters for the dis tortion probabilities and fertility probabilities of themore complex modelssince the values of these parameters affect the values of the translation align ment and fertility probabilities trained by them there is no effective way to optimize them other than torun the training procedure with a particular combination of values and evaluate the accuracy of the resulting alignmentssince evaluating each combina tion of parameter values in this way can take hours to days on a large training corpus it seems safe to say that these parameters are rarely if ever truly jointly optimized for a particular alignment taskthe second problem we address is the difficulty of adding features to the standard generative modelsgenerative models require a generative storyas to how the observed data is generated by an interrelatedset of stochastic processesfor example the gener ative story for ibm models 1 and 2 and the hmm alignment model is that a target language translation of a given source language sentence is generated byfirst choosing a length for the target language sentence then for each target sentence position choos ing a source sentence word and then choosing the corresponding target language wordwhen brown et al wanted to add a fertility component to create models 3 4 and 5 however this generative 81story did not fit any longer because it does not in clude how many target language words to align to each source language word as a separate decisionto model this explicitly they had to come up with a different generative storyin this paper we take a different approach to word alignment based on discriminative training of a weighted linear combination of a small number of featuresfor a given parallel sentence pair foreach possible word alignment considered we sim ply multiply the values of each of these features by a corresponding weight to give a score for that 
feature and sum the features scores to give an overall score for the alignmentthe possible alignment havingthe best overall score is selected as the word align ment for that sentence pairthus for a sentence pair we seek the alignment asuch that a argmaxa n i1 ifi where the fi are features and the i are weightswe optimize the model weights using a modified version of averaged perceptron learning as describedby collins this is fast to train because selecting the feature weights is the last step in build ing the model and the onlinenature of perceptronlearning allows the parameter optimization to con verge quicklyfurthermore no generative story has to be invented to explain how the features generate the data so new features can be easily added without having to change the overall structure of the modelin theory a disadvantage of a discrimintative ap proach compared to a generative approach is that it requires annotated data for trainingin practice however effective discriminative models for word alignment require only a few parameters which can be optimized on a set of annotated sentence pairs comparable in size to what is needed to tune the free parameters used in the generative approachas we will show a simple sequence of two such models can achieve alignment accuracy comparable to that of a combination of more complex standard modelswe develop two word alignment models incorpo rating different word association features intended to indicate how likely two words or groups of words are to be mutual translations plus additional features measuring how much word reordering is required bythe alignment1 and how many words are left un linkedone of the models also includes a feature measuring how often one word is linked to several wordseach of our feature scores have analogs in theibm and hmm modelsthe association scores corresponds to word translation probabilities the reordering scores correspond to distortion probabili ties the scores for words left unlinked corresponds to probabilities of words being linked to the nullword and the scores for onetomany links corre spond to fertility probabilities21 the loglikelihoodbased modelin our first model we use a loglikelihoodratio statistic as our measure of word associationwe chose this statistic because it has previously beenfound to be effective for automatically construct ing translation lexicons we compute llr scores using the following formula presented by moore llr fff eee clog pp in this formula f and e mean that the words whose degree of association is being measured occur in the respective target and source sentences of an alignedsentence pair f and e mean that the correspond ing words do not occur in the respective sentences fand eare variables ranging over these valuesand cis the observed joint count for the values of fand eall the probabilities in the for mula refer to maximum likelihood estimatesthe llr score for a pair of words is high if the words have either a strong positive association or a strong negative associationsince we expect translation pairs to be positively associated we discard any negatively associated word pairs by requiring thatp p pto reduce the memory re quirements of our algorithms we discard any word pairs whose llr score is less than 101we will use the term alignmentto mean an overall word alignment of a sentence pair and the term linkto mean the alignment of a particular pair of words or small group of words82in our first model the value of the word associa tion feature for an alignment is simply the sum of all the 
individual llr scores for the word pairs linkedby the alignmentthe llrbased model also in cludes the following features nonmonotonicity features it may be observed that in closely related languages word alignments of sentences that are mutual translations tend to be approximately monotonic even for distantly related languages the number of crossing links is far less than chance since phrases tend to be translated as contiguous chunksto model these tendencies we introduce two nonmonotonicity featuresto find the points of nonmonotonicity of a wordalignment we arbitrarily designate one of the lan guages as the source and the other as the targetwe sort the word pairs in the alignment first by source word position and then by target word positionwe then iterate through the sorted alignment looking only at the target word positionsthe points of nonmonotonicity in the alignment will be the places where there are backward jumps in this sequence of target word positionsfor example suppose we have the sorted alignment the sequence of target word positions in this sorted alignment is hence there is one point ofnonmonotonicity where target word position 2 fol lows target word position 5we still need to decide how to measure the degreeof nonmonotonicity of an alignmenttwo meth ods immediately suggest themselvesone is to sum the magnitudes of the backward jumps in the targetword sequence the other is to simply count the num ber of backward jumpsrather than choose between them we use both featuresthe onetomany feature it has often been observed that word alignment links tend to be oneto oneindeed word alignment results can often beimproved by restricting more general models to per mit only onetoone linksfor example och andney found that the intersection of the alignments found training the ibm models in both direc tions always outperformed either direction alone intheir experimentssince the ibm models allow one tomany links only in one direction this intersection can contain only onetoone linksto model the tendency for links to be onetoone we define a onetomany feature as the number of links connecting two words such that exactly one of them participates in at least one other linkwe also define a manytomany feature as the number of links that connect two words that both participate in other linkswe do not use this directly in the model but to cut down on the number of alignments we need to consider we discard any alignments having a nonzero value of the manytomany featurethe unlinked word feature to control the number of words that get linked to something we introduce an unlinked word feature that simply counts the total number of unlinked words in both sentences in an aligned sentence pair22 the conditionallinkprobabilitybasedmodelin this model we replace the llrbased word asso ciation statistic with the logarithm of the estimatedconditional probability of two words being linked given that they co occur in a pair of aligned sentencesthese estimates are derived from the best alignments according tosome other simpler modelfor example if for mer occurs 1000 times in english sentences whose french translations contain ancien and the simpler alignment model links them in 600 of those sentencepairs we might estimate the conditional link proba bility for this word pair as 06we find itbetter however to adjust these probabilities by sub tracting a small fixed discount from the link count lpd links 1 d cooc lpd represents the estimated conditional link probability for the words f and e links 1 is the number of 
An important difference between the LLR-based model and the CLP-based model is that the LLR-based model considers each word-to-word link separately, but allows multiple links per word, as long as they lead to an alignment consisting only of one-to-one and one-to-many links; in the CLP-based model, however, we allow conditional probabilities for both one-to-one and one-to-many clusters, but we require all clusters to be disjoint. For example, we estimate the conditional probability of linking "not" to "ne...pas" by considering the number of sentence pairs in which "not" occurs in the English sentence and both "ne" and "pas" occur in the French sentence, compared to the number of times "not" is linked to both "ne" and "pas" in pairs of corresponding sentences. However, when we make this estimate in the CLP-based model, we do not count a link between "not" and "ne...pas" if the same instance of "not", "ne", or "pas" is linked to any other words.

The CLP-based model incorporates the same additional features as the LLR-based model, except that it omits the one-to-many feature, since we assume that the one-to-one vs. one-to-many trade-off is already modeled in the conditional link probabilities for particular one-to-one and one-to-many clusters.

We have developed two versions of the CLP-based model, using two different estimates for the conditional link probabilities. One estimate of the conditional link probabilities comes from the LLR-based model described above, optimized on an annotated development set. The other estimate comes from a heuristic alignment model that we previously developed.² Space does not permit a full description of this heuristic model here, but in brief, it utilizes a series of greedy searches inspired by Melamed's competitive linking algorithm, in which constraints limiting alignments to being one-to-one and monotonic are applied at different thresholds of the LLR score, with a final cutoff of the LLR score below which no alignments are made.

² The conditional link probabilities used in the current work are those used in Method 4 of the earlier work. Full details are provided in the reference.

While the discriminative models presented above are very simple to describe, finding the optimal alignment according to these models is nontrivial. Adding a link for a new pair of words can affect the nonmonotonicity scores, the one-to-many score, and the unlinked word score differently, depending on what other links are present in the alignment. Nevertheless, we have found a beam-search procedure that seems highly effective in finding good alignments when used with these models.

For each sentence pair, we create a list of association types and their corresponding scores, consisting of the associations for which we have determined a score and for which the words involved in the association type occur in the sentence pair.³ We sort the resulting list of association types from best to worst according to their scores. Next, we initialize a list of possible alignments with the empty alignment, assigning it a score equal to the number of words in the sentence pair multiplied by the unlinked word weight. We then iterate through our sorted list of association types from best to worst, creating new alignments that add links for all instances of the association type currently being considered to existing alignments, potentially keeping both the old and new alignments in our set of possible alignments.

³ By "association type" we mean a possible link between a pair of words, or in the case of the CLP-based models, a possible one-to-many or many-to-one linkage of words.

Without pruning, we would soon be overwhelmed by a combinatorial explosion of alignments. The set of alignments is therefore pruned in two ways. First, we keep track at all times of the score of the best alignment we have seen so far, and any new alignment whose overall score is worse than the best score so far by more than a fixed difference d is immediately discarded. Second, for each instance of a particular alignment type, when we have completed creating modified versions of previous alignments to include that instance, we merge the set of new alignments that we have created into the set of previous alignments. When we do this merge, the resulting set of alignments is sorted by overall score, and only the N best alignments are kept, for a fixed N.
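The outer loop of this search can be sketched as follows. This is not the authors' implementation: the alignment representation, the model scoring function (score_fn), and the routine that proposes ways of adding one instance of an association type to an existing alignment (add_instance, which for the CLP-based model would also enforce cluster disjointness) are placeholders; only the loop structure and the two pruning steps described above are made explicit.

```python
def beam_search_alignment(sent_pair, assoc_types, score_fn, add_instance,
                          unlinked_weight, max_score_diff, beam_size):
    """Beam search over alignments, following the procedure described above.

    assoc_types  : list of (association_type, association_score) pairs
    score_fn     : returns the overall model score of an alignment
    add_instance : yields new alignments that add links for instances of an
                   association type to an existing alignment (placeholder
                   for the model-specific logic)
    """
    # Sort candidate association types from best to worst score.
    assoc_types = sorted(assoc_types, key=lambda t: t[1], reverse=True)

    # Start from the empty alignment: every word unlinked.
    empty = frozenset()          # alignments assumed hashable in this sketch
    num_words = len(sent_pair[0]) + len(sent_pair[1])
    alignments = {empty: num_words * unlinked_weight}
    best_score = alignments[empty]

    for assoc, _ in assoc_types:
        new_alignments = {}
        for alignment in alignments:
            # Keep the old alignment and also try adding the new links.
            for candidate in add_instance(alignment, assoc, sent_pair):
                s = score_fn(candidate, sent_pair)
                # Pruning 1: discard anything too far below the best so far.
                if s < best_score - max_score_diff:
                    continue
                best_score = max(best_score, s)
                new_alignments[candidate] = s
        # Merge the new alignments into the previous set ...
        alignments.update(new_alignments)
        # ... Pruning 2: keep only the N best alignments.
        alignments = dict(sorted(alignments.items(),
                                 key=lambda kv: kv[1],
                                 reverse=True)[:beam_size])

    return max(alignments.items(), key=lambda kv: kv[1])
```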
Some details of the search differ between the LLR-based model and the CLP-based model. One difference is how we add links to existing alignments. In both cases, if there are no existing links involving any of the words involved in the new link, we simply add it. If there are existing links involving word instances also involved in the new link, the two models are treated differently. For the CLP-based model, each association score is for a cluster of words that must be disjoint from any other association cluster, so when we add links for a new cluster, we must remove any other links involving the same word instances. For the LLR-based model, we can add additional links without removing old ones, but the resulting alignment may be worse due to the degradation in the one-to-many score. We therefore add both an alignment that keeps all previous links and an additional set of alignments, each of which omits one of the previous links involving one of the word instances involved in the new link.

The other difference in how the two models are treated is an extra pruning heuristic we use in the LLR-based model. In generating the list of association types to be used in aligning a given sentence pair, we use only association types which have the best association score for this sentence pair for one of the word types involved in the association. We initially explored limiting the number of associations considered for each word type simply as an efficiency heuristic, but we were surprised to discover that the most extreme form of such pruning actually reduced alignment error rate over any less restrictive form, or not pruning on this basis at all.

We optimize the feature weights using a modified version of averaged perceptron learning as described by Collins. Starting with an initial set of feature weight values, perceptron learning iterates through the annotated training data multiple times, comparing, for each sentence pair, the best alignment a_hyp according to the current model with the reference alignment a_ref. At each sentence pair, the weight for each feature is incremented by the difference between the value of the feature for the best alignment according to the model and the value of the feature for the reference alignment:

λ_i ← λ_i + ( f_i(a_hyp, e, f) − f_i(a_ref, e, f) )

The updated feature weights are used to compute a_hyp for the next sentence pair. Iterating through the data continues until the weights stop changing, because a_ref = a_hyp for each sentence pair, or until some other stopping condition is met. In the averaged perceptron, the feature weights for the final model are the average of the weight values over all the data, rather than simply the values after the final sentence pair of the final iteration.
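A minimal sketch of this training loop is shown below. It assumes a decode(pair, weights) function that returns the model's best alignment under the current weights and an extract_features(alignment, pair) function returning a vector of feature values; both are placeholders. The sign of the update simply follows the description above, and the modifications discussed next (per-pass averaging, a fixed association weight, a learning rate) are omitted but straightforward to add.

```python
import numpy as np

def averaged_perceptron(train_pairs, ref_alignments, extract_features,
                        decode, num_features, max_passes=10):
    """Averaged perceptron training of alignment feature weights.

    extract_features(alignment, pair) -> np.ndarray of feature values
    decode(pair, weights)             -> best alignment under current weights
    """
    weights = np.zeros(num_features)
    weight_sum = np.zeros(num_features)   # running sum for averaging
    steps = 0

    for _ in range(max_passes):
        changed = False
        for pair, a_ref in zip(train_pairs, ref_alignments):
            a_hyp = decode(pair, weights)
            if a_hyp != a_ref:
                # Increment each weight by the difference between the feature
                # value for the model's best alignment and for the reference
                # (sign convention as described in the text above).
                weights += extract_features(a_hyp, pair) - extract_features(a_ref, pair)
                changed = True
            weight_sum += weights
            steps += 1
        if not changed:            # a_ref == a_hyp for every sentence pair
            break

    # Averaged weights: the mean of the weight values over all the data.
    return weight_sum / max(steps, 1)
```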
We make a few modifications to the procedure as described by Collins. First, we average the weight values over each pass through the data, rather than over all passes, as we found this led to faster convergence. After each pass of perceptron learning through the data, we make another pass through the data with the feature weights fixed to their average values for the previous learning pass, to evaluate the current performance of the model. We iterate this procedure until a local optimum is found.

Next, we used a fixed weight of 1.0 for the word-association feature, which we expect to be the most important feature in the model. Allowing all weights to vary allows many equivalent sets of weights that differ only by a constant scale factor; fixing one weight eliminates a spurious apparent degree of freedom. This necessitates, however, employing a version of perceptron learning that uses a learning rate parameter. As described by Collins, the perceptron update rule involves incrementing each weight by the difference in the feature values being compared. If the feature values are discrete, however, the minimum difference may be too large compared to the unweighted association score. We therefore multiply the feature value difference by a learning rate parameter η to allow smaller increments when needed:

λ_i ← λ_i + η ( f_i(a_hyp, e, f) − f_i(a_ref, e, f) )

For the CLP-based model, based on the typical feature values we expected to see, we guessed that 0.01 might be a good value for the learning rate parameter. That seemed to produce good results, so we did not attempt to further optimize the learning rate parameter for this model.

The situation with the LLR-based model was more complicated. Our previous experience using LLR scores in statistical NLP applications indicated that, with large data sets, LLR values can get very high but small differences can be significant, which led us to believe that the same would be true of the weight values we were trying to learn. That meant that a learning rate small enough to let us converge on the desired weight values might take a very large number of iterations through the data to reach those values. We addressed this problem by using a progression of learning rates, starting at 1000 and reducing each successive rate by an order of magnitude, until we ended with a learning rate of 1.0. At each transition between learning rates, we re-initialized the weights to the optimum values found with the previous learning rate.

We experimented with one other idea for optimizing the weight values. Perceptron learning does not directly optimize error rate, but we have only a small number of parameters that we need to optimize. We therefore thought it might be helpful to apply a general optimization procedure directly to the error rate, starting from the best parameter values found by perceptron learning and using the N-best alignments found with these parameter values. We experimented with both the downhill simplex method and Powell's method, but we obtained slightly better results with a more heuristic method designed to look past minor local minima. We found that using this approach on top of perceptron learning led to slightly lower error rates on the development set with the CLP-based model, but not with the LLR-based model, so we used it only with the former in our final evaluations.

We evaluated our models using data from the bilingual word alignment workshop held at HLT-NAACL 2003. We used a subset of the Canadian Hansards bilingual corpus supplied for the workshop, comprising 500,000 English-French sentence pairs, including 447 manually word-aligned sentence pairs designated as test data.
The test data annotates particular pairs of words either as "sure" or "possible" links. Automatic sentence alignment of the training data was provided by Ulrich Germann, and the hand alignments of the words in the test data were created by Franz Och and Hermann Ney.

Since our discriminative training approach requires a small amount of annotated data for parameter optimization, we split the test data set into two virtually equal subsets by randomly ordering the test data pairs and assigning alternate pairs from the random order to the two subsets. We used one of these subsets as a development set for parameter optimization and held out the other for a final test set.

We report the performance of our alignment models in terms of precision, recall, and alignment error rate (AER) as defined by Och and Ney:

recall = |A ∩ S| / |S|
precision = |A ∩ P| / |A|
AER = 1 − ( |A ∩ P| + |A ∩ S| ) / ( |A| + |S| )

In these definitions, S denotes the set of alignments annotated as sure, P denotes the set of alignments annotated possible or sure, and A denotes the set of alignments produced by the method under test. Following standard practice in the field, we take AER, which is derived from F-measure, as the primary evaluation metric that we are attempting to optimize.

We first trained the LLR-based model by perceptron learning, using an N-best value of 20 and an unbounded allowable score difference in the alignment search, using the development set as annotated training data. We then aligned all the sentences of length 100 or less in our 500,000 sentence pair corpus, using an N-best value of 20 and a maximum allowable score difference of 125,000. We collected link counts and co-occurrence counts from these alignments for estimating conditional link probabilities. We trained CLP-based models from these counts for a range of values for the discount used in the conditional link probability estimation, finding a value of 0.4 to be a roughly optimal value of the discount parameter for the development set. We also trained a CLP-based model using the conditional link probabilities from the heuristic alignment model mentioned previously. In training both CLP-based models, we also used an N-best value of 20 and an unbounded allowable score difference in the alignment search.

We evaluated three models on the final test data: the LLR-based model and the two CLP-based models, one with conditional link probabilities from the LLR-based model (CLP 1) and one with conditional link probabilities from the heuristic alignment model (CLP 2); all parameters were optimized on the development set. Recall, precision, and alignment error rates on the test set are shown in Table 1.

Table 1: Discriminative model results
Alignment   Recall   Precision   AER
LLR         0.829    0.848       0.160
CLP 1       0.889    0.934       0.086
CLP 2       0.898    0.947       0.075

For comparison, we aligned our parallel corpus with IBM Model 4, using Och's Giza++ software package.⁴ We used the default configuration file included with the version of Giza++ that we used, which resulted in five iterations of Model 1, followed by five iterations of the HMM model, followed by five iterations of Model 4. We trained the models in both directions, English-to-French and French-to-English, and computed the union, intersection, and what Och and Ney call the "refined" combination of the two alignments. We evaluated the resulting alignments of the final test set, with the results shown in Table 2.

⁴ Thanks to Chris Quirk for carrying out this alignment.

Table 2: IBM Model 4 results
Alignment      Recall   Precision   AER
E → F          0.870    0.890       0.118
F → E          0.876    0.907       0.106
Union          0.929    0.845       0.124
Intersection   0.817    0.981       0.097
Refined        0.908    0.929       0.079

As these tables show, our discriminatively trained CLP-based models compare favorably to IBM Model 4 on this data set.
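For reference, the three metrics reported in these tables can be computed directly from the link sets; a small sketch follows, where links are represented as word-position pairs (an illustrative choice, not the paper's data format).

```python
def alignment_metrics(A, S, P):
    """Precision, recall, and AER as defined above.

    A : set of links produced by the aligner
    S : set of links annotated as sure
    P : set of links annotated as possible or sure (S is a subset of P)
    """
    recall = len(A & S) / len(S)
    precision = len(A & P) / len(A)
    aer = 1.0 - (len(A & P) + len(A & S)) / (len(A) + len(S))
    return precision, recall, aer

# Example with toy link sets of (source_pos, target_pos) pairs.
A = {(1, 1), (2, 3), (3, 2)}
S = {(1, 1), (3, 2)}
P = S | {(2, 3), (4, 4)}
print(alignment_metrics(A, S, P))   # (1.0, 1.0, 0.0)
```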
The one with conditional link probabilities from the heuristic alignment model (CLP 2) performs slightly better than the best of the Model 4 combinations, and the one with conditional link probabilities from the LLR-based model (CLP 1) performs only slightly worse.

An interesting question is why CLP 2 outperformed CLP 1; CLP 1 is the more principled model, so one might have expected it to perform better. We believe the most likely explanation is the fact that CLP 2 received 403,195 link probabilities from the heuristic model, while CLP 1 received only 144,051 link probabilities from the LLR-based model. Hence CLP 2 was able to consider more possible links.

In light of our claims about the ease of optimizing the models, we should make some comments on the time needed to train the parameters. Our current implementation of the alignment search is written in Perl and is therefore quite slow: alignment of our 500,000 sentence pair corpus with the LLR-based model took over a day on a 2.8 GHz Pentium IV workstation. Nevertheless, the parameter optimization was still quite fast, since it took only a few iterations over our 224 sentence pair development set. With either the LLR-based or CLP-based models, one combined learning/evaluation pass of perceptron training always took less than two minutes, and it never took more than six passes to reach the local optimum we took to indicate convergence. Total training time was greater, since we used multiple runs of perceptron learning with different learning rates for the LLR-based model and different conditional link probability discounts for CLP 1, but total training time for each model was around an hour.

When the first version of this paper was submitted for review, we could honestly state, "We are not aware of any previous work on discriminative word alignment models." Callison-Burch et al. had investigated the use of small amounts of annotated data to help train the IBM and HMM models, but the models were still generative and were trained using maximum-likelihood methods. Recently, however, three efforts nearly simultaneous with ours have made use of discriminative methods to train alignment models. Fraser and Marcu modify Model 4 to be a log-linear combination of 11 submodels and discriminatively optimize the submodel weights on each iteration of a Viterbi approximation to EM. Liu et al. also develop a log-linear model, based on IBM Model 3: they train Model 3 using Giza++ and then use the Model 3 score of a possible alignment as a feature value in a discriminatively trained log-linear model, along with features incorporating part-of-speech information and whether the aligned words are given as translations in a bilingual dictionary.
set of alignment models we now have an easy way to experiment with a wide variety of knowledge sources to improve wordalignment accuracy
H05-1011
A Discriminative Framework for Bilingual Word Alignment. Bilingual word alignment forms the foundation of most approaches to statistical machine translation. Current word alignment methods are predominantly based on generative models. In this paper we demonstrate a discriminative approach to training simple word alignment models that are comparable in accuracy to the more complex generative models normally used. These models have the advantages that they are easy to add features to and they allow fast optimization of model parameters using small amounts of annotated data. LLR can still be used for extracting positive associations by filtering, in a preprocessing step, words with possibly negative associations. We train two models, which we call stage 1 and stage 2, both in the form of a weighted linear combination of feature values extracted from a pair of sentences and a proposed word alignment of them. We use statistics like the log-likelihood ratio and the conditional link probability to measure word associations.
a maximum entropy word aligner for arabicenglish machine translation this paper presents a maximum entropyword alignment algorithm for arabic english based on supervised training datawe demonstrate that it is feasible to create training material for problems in machine translation and that a mixture of su pervised and unsupervised methods yields superior performance the probabilisticmodel used in the alignment directly models the link decisions significant improvement over traditional word alignment tech niques is shown as well as improvement onseveral machine translation tests perfor mance of the algorithm is contrasted with human annotation performance machine translation takes a source sequence s s1 s2 sk and generates a target sequence t t1 t2 tm that renders the meaning of the source sequence into the target sequencetypically algorithms operate on sentencesin the most general setup one or more source words can generate 0 1 or more target wordscurrent state of the art machine translation systems use phrasal features extracted automatically from parallel corporathese phrases are extracted using word alignment algorithms that are trained on parallel corporaphrases or phrasal features represent a mapping of source sequences into a target sequences which are typically a few words longin this paper we investigate the feasibility of training alignment algorithms based on supervised alignment dataalthough there is a modest cost associ ated with annotating data we show that a reduction of 40 relative in alignment error is possible over the giza aligner although there are a number of other applications for word alignment for example in creating bilingual dictionaries the primary application continues to be as a component in a machine translation systemwe test our aligner on several machine translation tests and show encouraging improvementsmost of the prior work on word alignments has been done on parallel corpora where the alignment at the sentence level is also done automaticallythe ibmmodels 15 produce word align ments with increasing algorithmic complexity and performancethese ibm models and more recent refinements as well as algorithms thatbootstrap from these models like the hmm algorithm described in are unsuper vised algorithmsthe relative success of these automatic techniques together with the human annotation cost has delayed the collection of supervised wordaligned corpora for more than a decade recently proposed a di rect alignment formulation and state that it would be straightforward to estimate the parameters givena supervised alignment corpusin this paper we ex tend their work and show that with a small amountof annotated data together with a modeling strat egy and search algorithm yield significant gains in alignment fmeasure89 show vany pal alvanyp secondwords wordnet the 2nd 2d pointed pwvyqaltaarw waart alwvyqpwords segmto aly aly source target papers document indicate point figure 1 alignment examplein order to describe the algorithm we will need to first describe the direct link modelfigure 1 shows two sequences where the top sequence is considered the source sequence and the bottom sequence the target sequenceeach sequence can have auxilliary information such as arabic segmentation or english wordnet information as showngiven the source and target sequences there are a number of different ways to link each target word to a sourcewordeach target word has a link li which indi cates which source position it links tothe range of li is from 0 to k and there are m of these 
linksthe source word position 0 is used to indicate null which we imagine gives rise to unaligned englishwordsin this paper we refer to these words as be ing spontaneousa valid link configuration has m linksdefine l to be the set of all possible valid link configurations and l to be a member of that setwe seek to maximize the alignment probability by finding the optimum link configuration lopt p argmax ll p p m i0 pwe factor this into a transition model and an obser vation model p 1z m i0 pp1where z is the normalizing constantwe factor the model as above so that the tran sition model computation which uses information available on the search hypotheses is reduced during the search processin the aligner presented here is always set to 05next we will describe the tran sition model then the observation model and finallythe experiments in alignment and machine transla tionin the ibm model 1 aligner the choice of the lan guage to serve as states of the search algorithm is not prescribed but practically the choice is important asit affects performanceto see this note that in gen erative models an input word can only be aligned toa single state in the searchin our current situation we are interested in aligning unsegmented ara bic words and typical words have a few affixes toindicate for example pronouns definiteness prepositions and conjunctionsin english these are sepa rate words and therefore to maximize performance the unsegmented arabic words serve as states in the search algorithm and we align english words to these states31 transition modelthe transition model tends to keep the alignmentsclose together and penalizes alignments in which ad jacent words in the target language come from very distant words in the source languagealso we would like to penalize many english words coming from the same arabic state we call this the state visit penalty and will be described laterin this paper we use a parametric form for the transition model p 1 z 1 dist 1ns 90 where ns represents the state visit penalty for state i z is the normalization constant and dist min a here a is a penalty for a zero distance transition andis set to 1 in the experiments belowthe min operator chooses the lowest cost transition distance ei ther from the previous state or the frontier state fi which is the right most state that has been visited this is a language specific criteria and in tended to model the adjective noun reversal between english and arabiconce the current noun phrase is completed the next word often aligns to the statejust beyond frontier stateas an example in fig ure 1 the verb pointedaligns to the first arabic word waart and aligning the toto its arabic counterpart alywould incur normally a distance of 3 but with the frontier notion it incurs only a penalty of 1 on the hypothesis that aligns the word secondto alvanypin this alignment with the frontier no tion there are only distance 1 transitions whereas the traditional shapes would incur a penalty of 2 for alignment of pointedand a penalty of 3 for the word tothe state visit penalty ns is the distance be tween the english words aligned to this state times the number of state visits1this penalty controls the fertility of the arabic wordsto determine the english words that aligned to the arabic positionthe search path is traced back for each hypothe sis and a sufficiently large beam is maintained sothat alignments in the future can correct past alignment decisionsthis penalty allows english determiners and prepositions to align to the arabic content word 
while penalizing distant words from align ing to the statein terms of alignment fmeasureto be described below the state visit penalty if re moved makes the performance degrade from f878 to f840 compared to removing the frontier notion which only degrades performance to f86932 observation modelthe observation model measures the linkage of the source and target using a set of feature functions defined on the words and their contextin figure 1 an event is a single link from an english word to an arabic state and the event space is the sentence pairwe use the maximum entropy formulation 1we are overloading the word stateto mean arabic word positionf h ti11 sk1 p 1z exp i ii where z is the normalizing constant z f exp i iiand i are binary valued feature functionsthe function selects the arabic word at the position being linked or in the case of segmentation featuresone of the segmentations of that positionwe re strict the history context to select from the current english word and words to the left as well as thecurrent words wordnet synset as re quired by the features defined belowas in the above functions simplify the con ditioning portion h by utilizing only the words andcontext involved in the link litraining is done us ing the iis technique and convergence often occurs in 310 iterationsthe five types of features which are utilized in the system are described belowphrase to phrase alignments are intepreted as each english word com ing from each of the arabic words321 lexical features the lexical features are similar to the translationmatrix of the ibm model 1however there is a sign ficant out of vocabulary issue in the model since training data is limitedall words that have a corpus frequency of 1 are left out of the model and classed into an unknown word class in order to explicitly model connecting unknown wordsfrom the training data we obtain 50k lexical features and applying the arabic segmenter obtain another 17k lexical features of the form 322 arabic segmentation features an arabic segmenter similar to provides the segmentation featuresa small dictionary is used to restrict the set of ara bic segments that can align to english stopwords for example that thealigns to aland that for inand toalign to band heraligns with the suffix hasegmentation features also help align un known words as stems might be seen in the training corpus with other prefixes or suffixesadditionally the ability to align the prefix and suffix accuratelytends to dragthe unknown stem to its english tar get91 323 wordnet features wordnet features provide normalization on the english wordsthe feature is instantiated for nounsadjectives adverbs and verbs following their definitions in wordnetif the arabic word has a seg mentation then the feature is otherwise it is the feature ties together english syn onyms and helps improve recall of the aligner324 spelling feature the spelling feature is applied only on unknownwords and is used to measure the string kernel dis tance between romanized arabicand english wordsthe feature is designed primar ily to link unknown namesfor example clintonis written as klyntwnin one of its romanized arabic versionsin a sentence measuring the string ker nel distance shows a correlation between these names even though there is not much overlap between thecharactersthe feature has four possible values no match somematch goodmatch and exact325 dynamic features dynamic features are defined on the lattice of thesearch algorithmthese features fire when the pre vious source and target word pair are 
linkedfor example one such feature is b inand if on the hypothesis we have just linked this pair and the nextenglish word is being aligned to the stem of the ara bic word where this prefix occurs this feature fires and boosts the probability that the next words are alignedthe basic intuition behind this feature is that words inside prepositional phrases tend to align which is similar to the dependency structure feature of at training time the lattice reduces to the single path provided by the annotationsince this fea ture tends to suffer from the drag of function words we insist that the next words that are being linked have at least one feature that appliesall word pairslinked in the training data have lexical features as de scribed above and if both source and target words are unknown they have a single feature for their linkapplying dynamic features on words that have atleast one other feature prevents words which are completely unrelated from being linked because of a fea ture about the context of the wordstwo types of dynamic features are distinguished english word with arabic prefixsuffix and english word with arabic stemsince the annotated training data for word alignmentis limited and a much larger parallel corpus is avail able for other aligners we smooth the observation anno1 anno1anno2 correction anno1 965 924 917 anno1952 932 table 1 fmeasure for human performance on word alignment for arabicenglishprobability with an ibm model 1 estimate p 1 z pmepm11where is set to 09 in the experiments belowin the equation above the s represents the arabic word that is being linked from the english word tiwhen is set to 10 there is no smoothing per formed and performance degrades to f840 from the best system performance when isset to 0 the model uses only the ibm model 1 distri bution and the resulting aligner is similar to an hmm aligner with the transition shape discussed above and yields performance of f732a beam search algorithm is utilized with the english words consumed in sequence and the arabic word positions serving as states in the search processinorder to take advantage of the transition model de scribed above a large beam must be maintainedto see this note that english words often repeat in a sentence and the models will tend to link the wordto all arabic positions which have the same ara bic contentin traditional algorithms the markov assumption is made and hypothesis are merged if they have the same history in the previous time stephowever here we maintain all hypotheses and merge only if the paths are same for 30 words which is the average sentence lengthwe have word aligned a portion of the arabic tree bank and material from the ldc news sources to obtain a total of 103k sentence pairs for trainingas a test of alignment we use the first 50 sentences of the mt03 evaluationtest set which has 1313 arabic words and 1528 en glish words 2in terms of annotation guidelines we use the following instructions align determiners to their head nouns alignments are done word by word unless the phrase is idiomatic in which case the entire phrase to phrase alignment was marked spontaneous words are marked as being part of a 2the test data is available by contacting the authors92 1k 3k 5k 7k 9k 103k of features 15510 32111 47962 63140 73650 80321 english oov 159 82 55 44 405 36 arabic oov 31 196 156 132 108 103 fmeasure 832 854 865 874 875 878 table 2 varying training data sizephrase wherever possible but left unaligned if there is no evidence to link the wordin order to measure alignment 
performance we use the standard aer measure but consider all links as surethis measure is then related to the fmeasure which can be defined in terms of precision and recall as precision the number of correct word links over the total number of proposed linksrecall the number of correct word links over the total number of links in the referenceand the usual definition of the fmeasure f 2pr and define the alignment error as aer 1 f in this paper we report our results in terms of f measure over aligned linksnote that links to thenull state are not included in the fmeasuresystems are compared rel ative to the reduction in aer61 annotator agreementwe measure intrainterannotator agreement on thetest set in order to determine the feasibility of hu man annotation of word linksthese are shown in table 1in the table the column for annotator 1 correctionis the first annotator correcting his own word alignments after a span of a yearafter two weeks the annotator was given the same material with all the links removed and asked to realign and we see that there is more discrepancy in resulting alignmentsthe differences are largely on the head concept where determiners are attachedand the alignment of spontaneous wordsthe perfor mance with a second annotator is in the same range as the reannotation by a single annotatorin order to evaluate the performance of the algo rithm we investigate the effect due to increasing the training data size additional feature types and comparable algorithms71 training data sizewe varied the training data size from 1k sentences to the complete set in table 2each batch reestimates the unknown word class by creating a vocabulary on the training setthe trend indicates a reasonable progression of performance and more data is required to determine the saturation point72 feature typesthe results obtained by different feature sets areshown in table 3each feature type was added incre mentally to the line above to determine the effect of the individual feature typesand then removed incrementally from the full sys tem in order to see the final effectthe results indicate that lexical featuresare the most important type of feature segmenta tion features further reduce the aer by 158the other features add small gains in performance whichalthough are not statistically significant for the align ment fmeasure are important in terms of feature extractionsegmentation features discussed above result in both suffix and prefix features as well asstem featuresin the subtract column for the seg mentation feature only the suffix and prefix features were removedthis result indicates that most of thealignment improvement from the segmentation fea ture comes in the form of new lexical features to link arabic stems and english words73 comparison to other alignmentalgorithms in order to gauge the performance of the algorithmwith respect to other alignment strategies we provide results using giza and an hmm max poste rior algorithm these algorithms as well as the model 1 smoothing for the maxent aligner are all trained on a corpus of 500k sentence pairsfrom the un parallel corpus and the ldc news cor pora released for 2005 note that these algorithms are unsupervised by design but we utilizethem to have a baseline for comparing the perfor mance of this supervised approach731 hmm max posterior alignerthe maximumposterior word alignments are obtained by finding the link configuration that maxi 93 system of add subtract feats feature feature word pairs 50070 8503 763 spelling 4 8511 877 segmentation 70 8739 
875 wordnet 13789 8754 875 dynamicwords 1952 8780 871 dynamicsegmentation 42 8784 878 table 3 alignment performance in terms of the feature types utilizedfmeasure giza 795 hmm 763 maxent 878 table 4 alignment performance mizes the posterior state probabilityin contrast in performing a viterbi alignment we compute the best state sequence given the observationthe maximum posterior computes the best state one at a time and iterates over all possible combinationsonce we find the maximum in the posterior probability matrixwe also know the corresponding state and observa tion which is nothing but the word pair we will then align the pair and continue to find the next posterior maximum and align the resulting pairat each iteration of the process a word pair is alignedthe process is repeated until either every word in one language is aligned or no more maximum can be found whichever happens first732 giza alignment in order to contrast our algorithm we rangiza in the standard configuration which i am plies 5 iterations of ibm model 1 hmm model 3 and model 4all parameters are left to their default valuesthe results using the three different aligners is shown in table 4the reduction in aer over thegiza system is 405 and over the hmm sys tem is 485the wilcoxon signedrank test yieldsa probability of 039 for rejecting the giza align ment over the hmm alignment whereas the maxent algorithm should be rejected with a probability of17e6 over the hmm algorithm and similarly maxent should be rejected with a probability of 09e 6 over the giza algorithmthese significance tests indicate that the maxent algorithm presented above is significantly better than either giza or hmmfigure 2 an alignment showing a split link from an arabic wordonce an alignment is obtained phrases which sat isfy the inverse projection constraint are extractedthis constraint enforces that a sequence of source words align to a sequence of target words as defined by the lowest and highest target index and when the target words are projected back to the source language through the alignment the original source sequence is retrievedexamination of the hand alignment training datashowed that this criteria is often violated for arabic and englishprepositional phrases with adjectives often require a splitfor example the align ment shown in figure 2 has of its relationsaligned to a word in arabic and tensealigned to the next wordthe inverse projection constraint fails in thiscase and in the experiments below we relax this con straint and generate features for single source words as long as the target phrase has a gap less than 2english wordsthis relaxation allows a pair of ad jectives to modify the head nounin future work we explore the use of features with variables to be filled at decode timethe experiments in machine translation are carriedout on a phrase based decoder similar to the one de 94 mt03 mt04 mt05 giza 0454 hmm 0459 0419 0456 maxent 0468 0433 0451 combined 0479 0437 0465 significance 0017 0020 table 5 machine translation performance using the nist 2005 bleu scorerscribed in in order to con trast the performance of the extracted features we compare the translation performance to a system built from alignments proposed by an hmm max posterior aligner and a system built from gizaalignmentsall other parameters of the decoder re main constant and only the feature set is changed for these experimentsas training data we use the un parallel corpus and the ldc news corpora released in 2005comparison should therefore be only madeacross systems 
reported here and not to earlier eval uations or other systemsthe results are shown in table 5combination of the phrasal features from thehmm and maxent alignments results in the combinedsystemthe combined system performs bet ter in all cases in mt03 and mt04 the maxentderived features perform better than the hmm sys temin mt05 there is a slight degradation which isnot significant and the combination system still re sults in an improvement over either systemsince the maxent aligner has access to a unique resourceevery attempt was made to make that resource avail able to the other systemsalthough giza and hmm can not directly utilize word aligned data thetraining data for maxent was converted to paral lel sentences where each sentence has only the pair of linked wordsthe resulting numbers make both hmm and giza much closer in performance to themaxent aligner but the results are better for com paring alignment methodsthe alignment errors made by the system can be attributed to english words that require multiword arabic states for example dates which are writtenin arabic in more than one form kanwn al vany ynayrfor january and compound words like ram allhin english is ramallahrare translation of a common arabic word as well as a common english word used as the translation for a rare arabic wordparallel corpora mismatch training material for translation is processed at a document level and yet systems often operate at a sentence levelhuman translators often use pronouns for earlier mentioned names although in the source lan guage the name is repeatedinformation whichis sometimes repeated in the source in an ear lier sentence is dropped in future sentences ofthe documentdocument level features are re quired to allow the system to have information to leave these words unalignedfigure 3 shows a human alignment on the left and a machine output on the rightthe columns next to the words indicate whether the alignments are goodor extrawhich indicates that these words are aligned to the special null statethere are two examples of multiword arabic states shown for january and the english word agendathe system aligns thebefore committee and it seemsin this case its an annotation errorin this exam ple the arabic words lnahyp altnzym walaedadand allwjsty are all unknown words in the vocabu lary yet the system managed to link 3 out 4 words correctlywhile significant gains have been made in align ment performance these gains have not directly translated to machine translation improvementsin fact although the giza system is better than the hmm system at alignment the machine translationresult on mt03 indicates a slight degradation the prime reason for this is that features extracted from the alignments are aggregated over the training corpusand this process helps good alignments to have significantly better counts than errors in alignmentalign ing rare words correctly should help performance but since their count is low it is not reflected in bleu scoresthis paper presented a word aligner trained on anno tated datawhile the performance of the aligner isshown to be significantly better than other unsuper vised algorithms the utility of these alignments in machine translation is still an open subject although gains are shown in two of the test setssince featuresare extracted from a parallel corpus most of the in formation relating to the specific sentence alignment is lost in the aggregation of features across sentencesimprovements in capturing sentence context could allow the machine translation system to 
use a rare but correct link appropriatelyanother significant result is that a small amount of wordaligned data is sufficient for this algorithm since a provision is made to handle 95 figure 3 an example sentence with human output on the left and system output on the rightunknown words appropriatelythis work was partially supported by the defense advanced research projects agency and monitored by spawar under contract non660019928916the views and findings contained in this material are those of the authors and do not necessarily reflect the position or policy of the yous government and no official endorsement should be inferredthis paper owes much to the collaboration of the statistical mt group at ibm
H05-1012
A Maximum Entropy Word Aligner for Arabic-English Machine Translation. This paper presents a maximum entropy word alignment algorithm for Arabic-English based on supervised training data. We demonstrate that it is feasible to create training material for problems in machine translation and that a mixture of supervised and unsupervised methods yields superior performance. The probabilistic model used in the alignment directly models the link decisions. Significant improvement over traditional word alignment techniques is shown, as well as improvement on several machine translation tests. Performance of the algorithm is contrasted with human annotation performance. We present a discriminatively trained 1-to-n model with feature functions specifically designed for Arabic. We train a discriminative model on a corpus of ten thousand word-aligned Arabic-English sentence pairs that outperforms a GIZA++ baseline.
local phrase reordering models for statistical machine translation we describe stochastic models of localphrase movement that can be incorporated into a statistical machine translation system these models pro vide properly formulated nondeficient probability distributions over reorderedphrase sequences they are implemented by weighted finite state trans ducers we describe themstyle parameter reestimation procedures based on phrase alignment under the complete translationmodel incorporating reordering our ex periments show that the reordering modelyields substantial improvements in trans lation performance on arabictoenglish and chinesetoenglish mt tasks we also show that the procedure scales as the bitext size is increased word and phrase reordering is a crucial component of statistical machine translation systemshowever allowing reordering in translation is computationally expensive and in some cases even prov ably npcomplete therefore any translation scheme that incorporates reordering must necessarily balance model complexity against the ability to realize the model without approximationin this paper our goal is to formulate models of lo cal phrase reordering in such a way that they can be embedded inside a generative phrasebased model this work was supported by an onr muri grant n000140110685of translation although thismodel of reordering is somewhat limited and can not capture all possible phrase movement it forms a proper parameterized probability distribution over reorderings of phrase sequenceswe show that with this model it is possible to perform maximum aposteriori decoding and ex pectation maximization style reestimation of model parameters over large bitext collectionswe now discuss prior work on word and phrase reordering in translationwe focus on smt systemsthat do not require phrases to form syntactic con stituentsthe ibm translation models describe word reordering via a distortion model de fined over word positions within sentence pairsthe alignment template model usesphrases rather than words as the basis for transla tion and defines movement at the level of phrasesphrase reordering is modeled as a first order markovprocess with a single parameter that controls the de gree of movementour current work is inspired by the block orientation model introduced by tillmann in which reordering allows neighbor ing blocks to swapthis is described as a sequence of orientations relative to themonotone block ordermodel parameters are blockspecific and estimated over word aligned trained bi text using simple heuristicsother researchers have reported performance gains in translation by allowing deviations from monotone word and phrase orderin these cases 161 0c 4c 5c 0d 1d 1v 2v 3v 4v 5v 6v 7v 1f 2f 3f 4f 5f 6f 7f 8f 9f 2d 3d 4d 5d 2c 3c1c x 1 x 2 x 3 x 4 x 5 1e 5e 7e2e 3e 4e 6e 9e8e you 1 you 2 you 3 you 4 you 5 y 1 y 5y 4y 3y 2 doivent de_25_exportationsgrains flchir exportations grains de_25_doivent flchir 1 les exportations de les exportations de grains doivent flchir de 25 grains doivent flchir de_25_ 1exportations doiventgrains flchir de_25_ grain exports are_projected_to by_25_ grain exports are projected to fall by 25 sentence fall source language target language sentence figure 1 ttm generative translation process here i 9k 5 are 7 j 9reordering is not governed by an explicit probabilis tic model over reordered phrases a language model is employed to select the translation hypothesiswealso note the prior work of wu closely re lated to tillmanns modelthe translation template model 
is a genera tive model of phrasebased translation bitext is described via a stochastic processthat generates source sentences and trans forms them into target sentences p p source language model g p source phrase segmentation w p phrase translation and reordering r p target phrase insertion p target phrase segmentation the ttm relies on a phrasepair inventory consisting of target language phrases and theirsource language translationstranslation is mod eled via component distributions realized as wfsts source language model source phrase segmentation phrase transla tion and reordering target phrase insertion and target phrase segmentation ttm reordering previously the ttm was for mulated with reordering prior to translation herewe perform reordering of phrase sequences follow ing translationreordering prior to translation was found to be memory intensive and unwieldy in contrast we will show that the cur rent model can be used for both phrase alignment and translation21 the phrase reordering modelwe now describe two wfsts that allow local reordering within phrase sequencesthe simplest allows swapping of adjacent phrasesthe second allows phrase movement within a three phrase win dowour formulation ensures that the overall modelprovides a proper parameterized probability distribution over reordered phrase sequences we empha size that the resulting distribution is not degeneratephrase reordering takes as its input a french phrase sequence in english phrase order x1 x2 xk this is then reordered into french phrase order y1 y2 yk note that words within phrases are not affectedwe make the following conditional independence assumption p p given an input phrase sequence xk1 we now associate a unique jump sequence bk1 with each per missible output phrase sequence yk1 the jump bk measures the displacement of the kth phrase xk ie xk ykbk k 1 2 k the jump sequence bk1 is constructed such that yk1is a permutation of xk1 this is enforced by con structing all models so that k k1 bk 0we now redefine the model in terms of the jump sequence p p ykbk xk k 0 otherwise 162 x 2 x 3 x 4 x 5x 1 y 2 y 3 y 4 y 5y 1 3b 01b 12b 1 4b 0 5b 0 doivent de_25_exportations flchir exportations grains de_25_doivent flchir grains figure 2 phrase reordering and jump sequencewhere yk1 is determined by xk1 and bk1 each jump bk depends on the phrasepair and preceding jumps bk11 p kk1 p where k1 is an equivalence classification of the jump sequence bk11 the jump sequence bk1 can be described by a deterministic finite state machine is the state arrived at by bk11 we will use k1 to denote we will investigate phrase reordering by restrict ing the maximum allowable jump to 1 phrase and to 2 phrases we will refer to these reordering models as mj1 and mj2in the first case bk 011 while in the second case bk 0112222 reordering wfst for mj1we first present the finite state machine of the phrase reordering process which has twoequivalence classes for any given his tory bk11 1 2a jump of 1 has to be followed by a jump of 1 and 1 is the start and end state this ensures k k1 bk 01 b1 b1 b0 2 figure 3 phrase reordering process for mj1under this restriction the probability of the jump bk can be simplified as p 1 bk 1 k1 1 1 1 bk 0 k1 1 1 bk 1 k1 2there is a single parameter jump probability 1 p associated with each phrasepair in the phrasepair inventorythis is the probability that the phrasepair appears out of order in the transformed phrase sequencewe now describe the mj1 wfstin the presentation we use uppercase letters to denote the en glish 
phrases and lowercase letters to denote the french phrases the ppi for this example is given in table 1english french parameters you x p 1 a a 05 02 a d 05 02 b b 10 04 c c 10 03 d d 10 08table 1 example phrasepair inventory with trans lation and reordering probabilitiesthe input to the wfst is a lattice of french phrase sequences derived from the frenchsentence to be translatedthe outputs are the cor responding english phrase sequencesnote that the reordering is performed on the english sidethe wfst is constructed by adding a selfloop for each french phrase in the input lattice and a 2arc path for every pair of adjacent french phrases in the latticethe wfst incorporates the translation model p and the reordering model p the score on a selfloop with labels is p on a 2arc path with labels and the score on the 1st arc is p 1 and on the 2nd arc is p in this example the input to this transducer is asingle french phrase sequence v a b c we per form the wfst composition rv project the result on the input labels and remove the epsilons to form the acceptor 1 which contains the six english phrase sequences translation given a french sentence a lattice of translations is obtained using the weighted finite state composition t g w are t the mostlikely translation is obtained as the path with the highest probability in t alignment given a sentencepair a lattice of phrase alignments is obtained by the finite state composition b s w are t where 163 a b 01 a b d 04 x 06 x 02 0480 b a d 04 x 05 x 02 0040 a d b 04 x 08 x 04 0128 a a b 04 x 01 x 04 0016 a b a 04 x 06 x 04 0096 b a a 04 x 05 x 04 0080 vr 1 a b 05 r v a b d b b 06d d 02 a d 04a a 04 b a 04 b d 04 d b 08 figure 4 wfst for the mj1 models is an acceptor for the english sentence e and t is an acceptor for the french sentence f theviterbi alignment is found as the path with the high est probability in b the wfst composition gives the wordtoword alignments between the sentenceshowever to obtain the phrase alignments we need to construct additional fsts not described here23 reordering wfst for mj2mj2 reordering restricts the maximum allowablejump to 2 phrases and also insists that the reorder ing take place within a window of 3 phrasesthis latter condition implies that for an input sequence a b c d we disallow the three output sequences b d a c c a d b c d a b in the mj2 finite state machine a given history bk11 can lead to one of the six states in fig 5b0 1 23 45 6 b1 b1b1 b2 b0 b2 b1 b1 b2 figure 5 phrase reordering process for mj2the jump probability of eqn 5 becomes p 1 bk 1 k1 1 2 bk 2 k1 1 1 1 2 bk 0 k1 1 1 bk 1 k1 2 1 1 bk 1 k1 2 05 bk 0 k1 3 05 bk 1 k1 3 1 bk 2 k1 4 1 bk 2 k1 5 1 bk 1 k1 6 we note that the distributions are based on two parameters 1 and 2 for each phrasepair suppose the input is a phrase sequence a b c the mj2 model allows 6 possible reorderingsa b c a c b b a c b c a c a b c b a the distri bution eqn 9 ensures that the sequences b c a andc b a are assigned equal probabilitythe distribu tions in eqns 1012 ensure that the maximum jump is 2 phrases and the reordering happens within awindow of 3 phrasesby insisting that the pro cess start and end at state 1 we ensure that the model is not deficienta wfst implementing the mj2 model can be easily constructed for bothphrase alignment and translation following the con struction described for the mj1 modelthe translation template model relies on an in ventory of target language phrases and their source language translationsour goal is to estimate the reordering model parameters p 
for each phrasepair in this inventoryhowever when translating a given test set only a subset of the phrasepairs is neededalthough there may be an advantage in estimating the model parameters under an inventory that covers all the training bitext we fix the phrasepair inventory to cover only the phrases on the test setestimation of the reordering model parameters over the training bitext is then performed under this testset specific inventory164 we employ the them algorithm to obtain maximum likelihood estimates of the reordering model parametersapplying them to the mj1 reordering model gives the following ml parameter estimates for each phrasepair 1 cxyou cxyou cxyou cxyou is defined for 1 2 and b 1 01any permissible phrase alignment of a sentence pair corresponds to a bk1 sequence which in turn specifies a k1 sequencecxyou is the expected number of times the phrasepair x you isaligned with a jump of b phrases when the jump history is we do not use full them but a viterbi train ing procedure that obtains the counts for the best alignmentsif a phrasepair is never seen in the viterbi alignments we backoff to a flat parameter 1 005the ml parameter estimates for the mj2 modelare given in table 2 with cxyou defined similarlyin our training scenario we use wfst op erations to obtain viterbi phrase alignments of the training bitext where the initial reordering model parameters are set to a uniform value of 005the counts cxyou are then obtained over the phrase alignmentsfinally the ml estimates of the parameters are computed using eqn 13 or eqn 14 we will refer to the viterbi trained models as mj1 vt and mj2 vt table 3 shows the mj1 vt parameters for some example phrasepairs in the arabicenglish taskyou x 1 which is the closest aqrb 10 international trade tjarp ealmyp 08 the foreign ministry wzarp xarjyp 06 arab league jamep dwl erbyp 04 table 3 mj1 parameters for ae phrasepairsto validate alignment under a ppi we mea sure performance of the ttm word alignmentson frenchenglish and chinese english as desired the alignment recall and alignment error rate improve modestly while alignment preci sion remains constantthis suggests that themodels allow more words to be aligned and thus i am prove the recall mj2 gives a further improvementin ar and aer relative to mj1alignment preci reordering metrics frneng chneng ap ar aer ap ar aer none 942 848 100 851 471 393 mj1 vt 941 868 91 853 494 375 mj2 vt 939 874 89 853 509 363 table 4 alignment performance with reorderingsion depends on the quality of the word alignments within the phrasepairs and does not change muchby allowing phrase reorderingthis experiment val idates the estimation procedure based on the phrase alignments however we do not advocate the use of ttm as an alternate word alignment techniquewe perform our translation experiments on the large data track of the nist arabictoenglish andchinesetoenglish mt tasks we report re sults on the nist 2002 2003 and 2004 evaluation test sets 141 exploratory experimentsin these experiments the training data is restricted to fbis bitext in ce and the news bitexts in aethe bitext consists of chunk pairs aligned at sentence and subsentence level in ae the training bitext consists of 38m english words 32m arabic words and 137k chunk pairsin ce the training bitext consists of 117m english words 89m chinese words and 674k chunk pairsour chinese text processing consists of word seg mentation followed bygrouping of numbersfor arabic our text pro cessing consisted of a modified buckwalter analysis followed by post 
processing to sep arate conjunctions prepostions and pronouns andalw deletionthe english text is processed us ing a simple tokenizer based on the text processing utility available in the the nist mteval toolkitthe language model training data consistsof approximately 400m words of english text de rived from xinhua and afp the english side of fbis the un and ae news texts and the online archives of the peoples dailytable 5 gives the performance of the mj1 andmj2 reordering models when translation is per formed using a 4gram lmwe report performance on the 02 03 04 test sets and the combined test set 1httpwwwnistgovspeechtestsmt 165 1 cxyou cxyou cxyou cxyou cxyou cxyou cxyou 2 cxyou cxyoucxyou cxyou cxyou cxyou cxyou cxyou table 2 ml parameter estimates for mj2 modelreordering bleu arabicenglish chineseenglish 02 03 04 all 02 03 04 all none 375 403 368 378 06 242 237 260 250 05 mj1 flat 404 439 394 407 06 257 245 274 262 05 mj1 vt 413 448 403 416 06 258 245 278 265 05 mj2 flat 410 444 397 411 06 264 249 277 267 05 mj2 vt 417 453 406 420 06 265 249 279 268 05 table 5 performance of mj1 and mj2 reordering models with a 4gram lmfor the combined set wealso show the 95 bleu confidence interval com puted using bootstrap resampling row 1 gives the performance when no reordering model is usedthe next two rows show the in fluence of the mj1 reordering model in row 2 a flat probability of 1 005 is used for all phrasepairs in row 3 a reordering probability isestimated for each phrasepair using viterbi train ing the last two rows show the effect ofthe mj2 reordering model row 4 uses flat proba bilities 005 2 001 for all phrasepairs row 5 applies reordering probabilities estimating with viterbi training for each phrasepair on both languagepairs we observe that reorder ing yields significant improvementsthe gains from phrase reordering are much higher on ae relative to ce this could be related to the fact that the word order differences between english and arabic are much higher than the differences between englishand chinesemj1 vt outperforms flat mj1 show ing that there is value in estimating the reordering parameters from bitextfinally the mj2 vt model performs better than the flat mj2 model but onlymarginally better than the mj1 vt modelthere fore estimation does improve the mj2 model but allowing reordering beyond a window of 1 phrase is not useful when translating either arabic or chinese into english in this frameworkthe flat mj1 model outperforms the no reordering case and the flat mj2 model is better than the flat mj1 model we hypothesize that phrase reordering increases search space of translations thatallows the language model to select a higher qual ity hypothesisthis suggests that these models of phrase reordering actually require strong languagemodels to be effectivewe now investigate the inter action between language models and reorderingour goal here is to measure translation performance of reordering models over variable span n gram lms we observe that both mj1 and mj2 models yield higher improvements under higher order lms eg on ae gains under 3g are higher than the gains with 2g reordering bleu ae ce 2g 3g 4g 2g 3g 4g none 210 368 378 161 248 250 mj1 vt 234 404 416 162 259 265 mj2 vt 235 406 420 160 261 268 table 6 reordering with variable span ngram lms on eval020304 setwe now measure performance of the reorder ing models across the three test set genres used in the nist 2004 evaluation news editorials andspeecheson ae mj1 and mj2 yield larger i am provements on news relative to the 
other genreson ce the gains are larger on speeches and ed itorials relative to newswe hypothesize that thephrasepair inventory reordering models and lan guage models could all have been biased away from the test set due to the training datathere may also be less movement across these other genres166 reordering bleu ae ce news eds sphs news eds sphs none 411 308 333 236 259 308 mj1 vt 456 326 357 248 278 333 mj2 vt 462 327 355 248 278 337 table 7 performance across eval 04 test genresbleu arabicenglish chineseenglish reordering 02 03 04n 02 03 04n none 402 423 433 289 274 273 mj1 vt 431 450 456 302 282 289 metbasic 448 472 482 313 303 303 metibm1 452 482 497 318 307 310 table 8 translation performance on large bitexts42 scaling to large bitext training setswe here describe the integration of the phrase re ordering model in an mt system trained on largebitextsthe text processing and language models have been described in 41alignment mod els are trained on all available bitext and word alignments are obtained over the bitextphrasepairs are then extracted from the word alignments mj1 model parameters are estimated over all bitext on ae and over the nonun bitext on cefinally we use minimum error training to train loglinear scaling fac tors that are applied to the wfsts in equation 104news is used as the met training settable 8 reports the performance of the systemrow 1 gives the performance without phrase re ordering and row 2 shows the effect of the mj1 vt modelthe mj1 vt model is used in an initial decoding pass with the fourgram lm to generate translation latticesthese lattices are then rescored under parameters obtained using met and 1000best lists are generatedthe 1000best lists are augmented with ibm model1 scores and then rescored with a second setof met parametersrows 3 and 4 show the perfor mance of the metbasic and metibm1 modelswe observe that the maximum likelihood phrasereordering model yields significantly improved translation performance relative to the mono tone phrase order translation baselinethis confirms the translation performance improvements found over smaller training bitextswe also find additional gains by applying met to optimize the scaling parameters that are applied to the wfst component distributions within the ttmin this procedure the scale factor applied to the mj1 vt phrase translation and re ordering component is estimated along with scale factors applied to the other model components in other words the mlestimated phrase reorderingmodel itself is not affected by met but the likeli hood that it assigns to a phrase sequence is scaled by a single discriminatively optimized weightthe improvements from met demonstrate that the mj1 vt reordering models can be incorporated within a discrimi native optimized translation system incorporating a variety of models and estimation proceduresin this paper we have described local phrase reorder ing models developed for use in statistical machine translationthe models are carefully formulated so that they can be implemented as wfsts and we show how the models can be incorporated into the translation template model to perform phrasealignment and translation using standard wfst operationsprevious approaches to wfstbased re ordering con structed permutation acceptors whose state spaces grow exponentially with the length of the sentence to be translatedas a result these acceptors have to be pruned heavily for use in translationin contrast ourmodels of local phrase movement do not grow explosively and do not require any pruning or 
approx imation in their constructionin other related workbangalore and ricardi have trained wf sts for modeling reordering within translation their wfst parses word sequences into trees containing reordering information which are then checked for wellformed bracketsunlike this approach our model formulation does not use a tree representation and also ensures that the output sequences are validpermutations of input phrase sequences we empha size again that the probability distribution induced over reordered phrase sequences is not degenerateour reordering models do resemble those of in that we 167 treat the reordering as a sequence of jumps relativeto the original phrase sequence and that the likelihood of the reordering is assigned through phrase pair specific parameterized modelswe note thatour implementation allows phrase reordering beyond simply a 1phrase window as was done by till mannmore importantly our model implements a generative model of phrase reordering which can be incorporated directly into a generative model of theoverall translation processthis allows us to per form embeddedthemstyle parameter estimation in which the parameters of the phrase reordering model are estimated using statistics gathered under the complete model that will actually be used in translationwe believe that this estimation of model parameters directly from phrase alignments obtainedunder the phrase translation model is a novel contri bution prior approaches derived the parameters of the reordering models from word aligned bitext eg within the phrase pair extraction procedurewe have shown that these models yield improve ments in alignment and translation performance on arabicenglish and chineseenglish tasks and that the reordering model can be integrated into largeevaluation systemsour experiments show that discriminative training procedures such minimum er ror training also yield additive improvements by tuning ttm systems which incorporate mltrained reordering modelsthis is essential for integrating our reordering model inside an evaluation systemwhere a variety of techniques are applied simultane ouslythe mj1 and mj2 models are extremely simple models of phrase reorderingdespite their sim plicity these models provide large improvements in bleu score when incorporated into a monotone phrase order translation systemmoreover they can be used to produced translation lattices for use by more sophisticated reordering models that allow longer phrase order movementfuture work will build on these simple structures to produce more powerful models of word and phrase movement in translation
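As a rough illustration of the Viterbi-based estimation described above, the Python sketch below (written for this note, not taken from the paper) counts monotone and swapped jumps for each phrase pair over a set of Viterbi phrase alignments and turns them into MJ1 swap probabilities, backing off to a flat value for unseen pairs. The data structures, function names and the triple format are assumptions; the actual system gathers these counts through WFST operations over alignment lattices.

from collections import defaultdict

def estimate_mj1(viterbi_alignments):
    # viterbi_alignments: one entry per sentence pair, each a list of
    # (target_phrase, source_phrase, jump) triples read off the Viterbi phrase
    # alignment, with jump = 0 for monotone order and jump = 1 for a swap.
    counts = defaultdict(lambda: [0, 0])   # (target, source) -> [C(b=0), C(b=1)]
    for alignment in viterbi_alignments:
        for tgt, src, jump in alignment:
            counts[(tgt, src)][jump] += 1
    # ML estimate: the fraction of occurrences in which the phrase pair swaps.
    return {pair: c1 / (c0 + c1) for pair, (c0, c1) in counts.items()}

def swap_probability(beta1, pair, flat_backoff=0.05):
    # Phrase pairs never seen in the Viterbi alignments back off to a flat value.
    return beta1.get(pair, flat_backoff)

Viterbi counts rather than full EM expectations are used here, matching the Viterbi training procedure the paper reports.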
H05-1021
Local phrase reordering models for statistical machine translation. We describe stochastic models of local phrase movement that can be incorporated into a statistical machine translation system. These models provide properly formulated, non-deficient probability distributions over reordered phrase sequences. They are implemented by weighted finite state transducers. We describe EM-style parameter re-estimation procedures based on phrase alignment under the complete translation model incorporating reordering. Our experiments show that the reordering model yields substantial improvements in translation performance on Arabic-to-English and Chinese-to-English MT tasks. We also show that the procedure scales as the bitext size is increased. We present a polynomial-time strategy. We define two local reordering models for their translation template model; in the first one, called MJ1, only adjacent phrases are allowed to swap and the movement has to be done within a window of 2.
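To make the MJ1 constraint in the summary concrete, here is a small hypothetical sketch (not part of the original system) that enumerates the phrase orders MJ1 permits and scores each one with keep/swap factors. Keying the swap probability on a single phrase rather than on the phrase pair, as the paper does, is a deliberate simplification.

def mj1_reorderings(phrases, beta1, flat=0.05):
    # Enumerate the phrase orders permitted by MJ1 (each phrase may jump at most
    # one position, i.e. only adjacent phrases swap) and score each order.
    # beta1 maps a phrase to its swap probability (simplified keying; see above).
    n = len(phrases)

    def expand(i):
        if i >= n:
            yield [], 1.0
            return
        swap_p = beta1.get(phrases[i], flat) if i + 1 < n else 0.0
        # keep phrase i in place
        for rest, p in expand(i + 1):
            yield [phrases[i]] + rest, (1.0 - swap_p) * p
        # swap phrases i and i+1; the displaced phrase then returns with probability 1
        if i + 1 < n:
            for rest, p in expand(i + 2):
                yield [phrases[i + 1], phrases[i]] + rest, swap_p * p

    return list(expand(0))

# For three phrases a b c this yields exactly the orders a b c, a c b and b a c,
# and the scores of the allowed orders sum to one, mirroring the non-deficiency
# property the paper emphasizes.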
extracting product features and opinions from reviews consumers are often forced to wade through many online reviews inorder to make an informed prod uct choice this paper introducesopine an unsupervised informationextraction system which mines reviews in order to build a model of important product features their evalu ation by reviewers and their relative quality across products compared to previous work opine achieves 22 higher precision on the feature extraction task opines novel use ofrelaxation labeling for finding the semantic orientation of words in con text leads to strong performance on the tasks of finding opinion phrases and their polarity the web contains a wealth of opinions about products politicians and more which are expressed in newsgroupposts review sites and elsewhereas a result the prob lem of opinion mininghas seen increasing attention over the last three years from and many othersthis paper focuses on product reviews though our methods apply to a broader range of opinionsproduct reviews on web sites such as amazoncom and elsewhere often associate metadata with each review indicating how positive it is using a 5starscale and also rank products by how they fare in the re views at the sitehowever the readers taste may differ from the reviewersfor example the reader may feel strongly about the quality of the gym in a hotel whereasmany reviewers may focus on other aspects of the ho tel such as the decor or the locationthus the reader is forced to wade through a large number of reviews looking for information about particular features of interestwe decompose the problem of review mining into the following main subtasks i identify product featuresiiidentify opinions regarding product featuresiiidetermine the polarity of opinionsivrank opinions based on their strengththis paper introduces opine an unsupervised infor mation extraction system that embodies a solution to eachof the above subtasksopine is built on top of the know itall web informationextraction system as detailed in section 3given a particular product and a corresponding set of reviews opine solves the opinion mining tasks outlinedabove and outputs a set of product features each accom panied by a list of associated opinions which are ranked based on strength this output information can then be used to gen erate various types of opinion summariesthis paper focuses on the first 3 review mining sub tasks and our contributions are as follows 1we introduce opine a reviewmining system whosenovel components include the use of relaxation labeling to find the semantic orientation of words in the context of given product features and sentencesreviewmining system and find that opines precision on the feature extraction task is 22 better though its recall is 3 lower on hus data setswe show that 13 of this increase in precision comes from using opines feature assessment mechanism on review data while the rest is due to web pmi statistics3while many other systems have used extracted opinion phrases in order to determine the polarity of sentences or documents opine is the first to report its precision andrecall on the tasks of opinion phrase extraction and opin ion phrase polarity determination in the context of known product features and sentenceson the first task opinehas a precision of 79 and a recall of 76on the sec ond task opine has a precision of 86 and a recall of 89339 input product class c reviews r output set of feature ranked opinion list tuples rparsereviews efindexplicitfeatures ofindopinions co clusteropinions 
ifindimplicitfeatures rorankopinions outputtuples figure 1 opine overviewthe remainder of this paper is organized as follows section 2 introduces the basic terminology section 3 gives an overview of opine describes and evaluates its main components section 4 describes related work and section 5 presents our conclusiona product class is a set of products opine extracts the following types of prod uct features properties parts features of product parts related concepts parts and properties of related conceptsrelated concepts are concepts relevant to the customersexperience with the main product the relation ships between the main product and related concepts are typically expressed as verbs or prepositions features can be explicit or i am plicit opine also extracts opinion phrases which are adjec tive noun verb or adverb phrases representing customer opinionsopinions can be positive or negative and vary in strength this section gives an overview of opine and describes its components and their experimental eval uationgoal given product class c with instances i and reviews r opines goal is to find a set of tuples st f f and oi oj o where a f is the set of product class features in r b o is the set of opinion phrases in r c f is a feature of a particular product instanced o is an opinion about f in a particular sentenced the opinions associated with each feature f are ranked based on their strengthsolution the steps of our solution are outlined in figure 1 aboveopine parses the reviews using mini par and applies a simple pronounresolution module to parsed review dataopine then uses the datato find explicit product features opines feature as sessor and its use of web pmi statistics are vital for the extraction of highquality features opine then identifies opinion phrases associated with features in eand finds their polarityopines novel use of relaxationlabeling techniques for determining the semantic orien tation of potential opinion words in the context of given features and sentences leads to high precision and recall on the tasks of opinion phrase extraction and opinion phrase polarity extraction in this paper we only focus on the extraction of explicit features identifying corresponding customer opin ions about these features and determining their polaritywe omit the descriptions of the opinion clustering i am plicit feature generation and opinion ranking algorithms301 the knowitall systemopine is built on top of knowitall a webbaseddomainindependent information extraction system given a set of relations of interestknowitall instantiates relationspecific generic extrac tion patterns into extraction rules which find candidate factsknowitalls assessor then assigns a probability to each candidatethe assessor uses a form of pointwisemutual information between phrases that is esti mated from web search engine hit counts it computes the pmi between each fact and automatically generated discriminator phrases relationship in the context of the scanner classgiven fact f and discriminator d the computed pmi score is pmi hitshitshits the pmi scores are converted to binary features for anaive bayes classifier which outputs a probability asso ciated with each fact 31 finding explicit featuresopine extracts explicit features for the given productclass from parsed review datafirst the system recur sively identifies both the parts and the properties of the given product class and their parts and properties in turncontinuing until no candidates are foundthen the sys tem finds related concepts as described in 
and extracts their parts and propertiestable 1 shows that each feature type contributes to the set of final features explicit features examples total properties scannersize 7 parts scannercover 52 features of parts batterylife 24 related concepts scannerimage 9 related conceptsfeatures scannerimagesize 8 table 1 explicit feature information 340in order to find parts and properties opine first ex tracts the noun phrases from reviews and retains thosewith frequency greater than an experimentally set thresholdopines feature assessor which is an instantia tion of knowitalls assessor evaluates each noun phrase by computing the pmi scores between the phrase and meronymy discriminators associated with the product class opine distinguishes parts from properties using wordnets isa hi erarchy and morphological cues 32 experiments explicit feature extractionin our experiments we use sets of reviews for 7 product classes which include the pub licly available data sets for 5 product classes from hus system is the review mining sys tem most relevant to our workit uses association rulemining to extract frequent review noun phrases as featuresfrequent features are used to find potential opinion words and the system uses word net synonymsantonyms in conjunction with a set of seedwords in order to find actual opinion wordsfinally opinion words are used to extract associated infrequent fea turesthe system only extracts explicit featureson the 5 datasets in opines precision is 22 higher than hus at the cost of a 3 re call dropthere are two important differences between opine and hus system a opines feature assessor uses pmi assessment to evaluate each candidate feature and b opine incorporates web pmi statistics in addition to review data in its assessmentin the following we quantify the performance gains from a and ba in order to quantify the benefits of opines feature assessor we use it to evaluate the features extracted by hus algorithm on review data the feature assessor improves hus precision by 6b in order to evaluate the impact of using web pmi statistics we assess opines features first on reviews and then on reviews in conjunction with the web web pmi statistics increase precision by an av erage of 145overall 13 of opines precision increase over hus system comes from using pmi assessment on reviews and the other 23 from the use of the web pmi statisticsin order to show that opines performance is robustacross multiple product classes we used two sets of reviews downloaded from tripadvisorcom for hotels and amazoncom for scannerstwo annotators la beled a set of unique 450 opine extractions as correct or incorrectthe interannotator agreement was 86the extractions on which the annotators agreed were usedto compute opines precision which was 89fur data explicit feature extraction precision hu huar huarw opr opine d1 075 005 017 007 019 d2 071 003 019 008 022 d3 072 003 025 009 023 d4 069 006 022 008 025 d5 074 008 019 004 021 average 072 006 020 007 022table 2 precision comparison on the explicit feature extraction taskopines precision is 22 better than husprecision web pmi statistics are responsible for 23 of the pre cision increaseall results are reported with respect to hus data explicit feature extraction recall hu huar huarw opr opine d1 082 016 008 014 002 d2 079 017 009 013 006 d3 076 012 008 015 003 d4 082 019 004 017 003 d5 080 016 006 012 002 average 080 016 007 014 003table 3 recall comparison on the explicit feature extraction taskopines recall is 3 lower than the recall of hus original system 
all results are reported with respect to hus thermore the annotators extracted explicit features from800 review sentences the inter annotator agreement was 82opines recall on the set of 179 features on which both annotators agreed was 7333 finding opinion phrases and their polaritythis subsection describes how opine extracts potentialopinion phrases distinguishes between opinions and non opinions and finds the polarity of each opinion in thecontext of its associated feature in a particular review sen tence331 extracting potential opinion phrasesopine uses explicit features to identify potential opinion phrasesour intuition is that an opinion phrase as sociated with a product feature will occur in its vicinitythis idea is similar to that of and but instead of using a window of size k or the output of a noun phrase chunker opine takes advantage of the syntactic dependencies computed by theminipar parserour intuition is embodied by 10 ex traction rules some of which are shown in table 4if an explicit feature is found in a sentence opine applies the extraction rules in order to find the heads of potentialopinion phraseseach head word together with its modi 341 fiers is returned as a potential opinion phrase1extraction rules examples if po m scanner if po o lamp has if po p i this scanner if po p program table 4 examples of domainindependent rules forthe extraction of potential opinion phrasesnota tion popotential opinion mmodifier npnoun phrasessubject ppredicate oobjectextracted phrases are en closed in parenthesesfeatures are indicated by the typewriter fontthe equality conditions on the lefthand side use pos headrule templates rules dep m v st dep depv st m ov st dep dep v st m o table 5 dependency rule templates for finding words w wwith related so labels opine instantiates these templates in order to obtain extraction rulesnotation depdependent mmodifier oobject vwwwordsopine examines the potential opinion phrases in order to identify the actual opinionsfirst the system finds thesemantic orientation for the lexical head of each poten tial opinion phraseevery phrase whose head word has a positive or negative semantic orientation is then retained as an opinion phrasein the following we describe how opine finds the semantic orientation of words332 word semantic orientation opine finds the semantic orientation of a word w in the context of an associated feature f and sentence s we restate this task as follows task given a set of semantic orientation labels a set of reviews and a set of tuples where w is a potential opinion word associated with feature f in sentence s assign a so label to each tuple for example the tuple would be assigned a negative so labelnote we use wordto refer to a potential opinion word w and featureto refer to the word or phrase which represents the explicit feature f solution opine uses the 3step approach below 1given the set of reviews opine finds a so label foreach word w 2given the set of reviews and the set of so labels forwords w opine finds a so label for each pair1the tuples in table 4 are automatically generated from minipars output3given the set of so labels for pairs opinefinds a so label for each input tupleeach of these subtasks is cast as an unsupervised col lective classification problem and solved using the samemechanismin each case opine is given a set of objects and a set of labels opine then searches for a global assignment of la bels to objectsin each case opine makes use of local constraints on label assignments a key insight in opine is that the 
problem of searching for a global so label assignment to words pairs or tupleswhile trying to satisfy as many local constraints on as signments as possible is analogous to labeling problems in computer vision opine uses a wellknown computer vision technique relaxation labeling in order to solve the three subtasks described above333 relaxation labeling overview relaxation labeling is an unsupervised classification technique which takes as input a a set of objects b a set of labels c initial probabilities for each objects possible labels d the definition of an object os neighborhood e the definition of neighborhood features f the definition of a support function for an object labelthe influence of an object os neighborhood on its label l is quantified using the support functionthe support function computes the probability of the label l being assigned to o as a function of os neighborhood fea turesexamples of features include the fact that a certainlocal constraint is satisfied relaxation labeling is an iterative procedure whoseoutput is an assignment of labels to objectsat each itera tion the algorithm uses an update equation to reestimate the probability of an object label based on its previous probability estimate and the features of its neighborhoodthe algorithm stops when the global label assignment stays constant over multiple consecutive iterationswe employ relaxation labeling for the following rea sons a it has been extensively used in computervision with good results b its formalism allows for many typesof constraints on label assignments to be used simulta neouslyas mentioned before constraints are integratedinto the algorithm as neighborhood features which influ ence the assignment of a particular label to a particular objectopine uses the following sources of constraints 342 a conjunctions and disjunctions in the review textb manuallysupplied syntactic dependency rule templates the templates are automatically instantiated by our system with different dependency re lationships in order to obtain syntactic dependency rules which find words with related so labelsc automatically derived morphological relationships d wordnetsupplied synonymy antonymy isa andmorphological relationships between wordsfor exam ple clean and neat are synonyms and so they are likely to have similar so labelseach of the so label assignment subtasks previously identified is solved using a relaxation labeling stepin the following we describe in detail how relaxation labeling is used to find so labels for words in the given review sets334 finding so labels for words for many words a word sense or set of senses is used throughout the review corpus with a consistently positive negative or neutral connotation thus in many cases a word ws so label in the context of a feature f and sentence s will be the same as its so label in the context of other features and sentencesin the following we describe how opines relaxation la beling mechanism is used to find a words dominant so label in a set of reviewsfor this task a words neighborhood is defined as the set of words connected to it through conjunctionsdisjunctions and all other relationships previously intro duced as sources of constraintsrl uses an update equation to reestimate the probability of a word label based on its previous probabil ity estimate and the features of its neighborhood at iteration m let q denote the support function for label l of w and let p l denote the probability that l is the label of w p l is computed as follows rl update equation p l p l p 
lp l where lpos neg neutral and 0 is an experimentally set constant keeping the numerator and probabilities positiverls output is an assignment of dominant so labels to wordsin the following we describe in detail the initialization step the derivation of the support function formula and the use of neighborhood featuresrl initialization step opine uses a version of turneys pmibased approach in order to de rive the initial probability estimates l for a subset s of the wordsopine computes a so score so for each w in s as the difference between the pmi of w with positive keywords and the pmi of w with negative keywords when so is small or w rarely cooccurs with the key words w is classified as neutralif so 0 then w is positive otherwise w is negativeopine then uses the labeled s set in order to compute prior probabilities p l l pos neg neutral by computing the ratio between the number of words in s labeled land ssuch probabilities are used as initial probabil ity estimates associated with the labels of the remaining wordssupport function the support function computes the probability of each label for word w based on the labels of objects in ws neighborhood n let ak wj n 0 k 3n rep resent one of the potential assignments of labels to the words in n let p denote the probability of thisparticular assignment at iteration m the support for la bel l of word w at iteration m is q 3nx k1 p lak p we assume that the labels of ws neighbors are inde pendent of each other and so the formula becomes q 3nx k1 p lakny j1 p lj every p lj term is the estimate for theprobability that l lj the p lak term quantifies the influence of a particular label assignment to ws neighborhood over ws labelin the following we describe how we estimate this termneighborhood features each type of word relationship which constrains the assignment of so labels to words is mapped by opine to a neighborhood featurethismapping allows opine to use simultaneously use multi ple independent sources of constraints on the label of aparticular wordin the following we formalize this map pinglet t denote the type of a word relationship in are and let akt represent the labelsassigned by ak to neighbors of a word w which are con nected to w through a relationship of type t we have ak t akt and p lak p l t akt for each relationship type t opine defines a neighborhood feature ft which computes p lakt the probability that ws label is l given akt p l t akt isestimated combining the information from various fea tures about ws label using the sigmoid function 343 p lak ci where c0 cj are weights whose sum is 1 and which reflect opine s confidence in each type of featuregiven word w label l relationship type t and neigh borhood label assignment ak let nt represent the subsetof ws neighbors connected to w through a type t rela tionshipthe feature ft computes the probability that ws label is l given the labels assigned by ak to wordsin nt using bayess law and assuming that these la bels are independent given l we have the following formula for ft at iteration m ft p lnt y j1 p l p l is the probability that word wj has label lj if wj and w are linked by a relationship of type t and w has label l we make the simplifying assumption that this probability is constant and depends only of t l and l not of the particular words wj and w for each tuple llj pos neg neutral opine buildsa probability table using a small set of bootstrapped pos itive negative and neutral words335 finding so labels this subtask is motivated by the existence of frequent words which 
change their so label based on associatedfeatures but whose so labels in the context of the respec tive features are consistent throughout the reviews in order to solve this task opine first assigns each pair an initial so label which is ws so labelthe system then executes a relaxation labeling step duringwhich syntactic relationships between words and respec tively between features are used to update the default so labels whenever necessaryfor example appears in the proximity of if roomand fanare conjoined by and this suggests that hotand brokenhave similar so labels in the context of their respective featuresif brokenhas a strongly negativesemantic orientation this fact contributes to opines be lief that hotmay also be negative in this contextsince occurs in the vicinity of other such phrases hotacquires a negative so label in the context of room336 finding so labels this subtask is motivated by the existence of pairs for which ws orientation changes based on the sentence in which the pair appears in order to solve this subtask opine first assigns each tuple an initial label which is simply the so la bel for the pairthe system then uses syntactic relationships between words and respectively features in order to update the so labels when necessaryfor example in the sentence i hated the big drafty room because i ended up freezing bigand hatesatisfy condition 2 in table 5 and therefore opine expects themto have similar so labelssince hatehas a strong neg ative connotation bigacquires a negative so label in this contextin order to correctly update so labels in this last step opine takes into consideration the presence of negation modifiersfor example in the sentence i do not like a large scanner either opine first replaces the positive pair with the negative labeled pair and then infers that largeis likely to have a negative so label in this context337 identifying opinion phrases after opine has computed the most likely so labels for the head words of each potential opinion phrase in thecontext of given features and sentences opine can ex tract opinion phrases and establish their polarityphraseswhose head words have been assigned positive or nega tive labels are retained as opinion phrasesfurthermorethe polarity of an opinion phrase o in the context of a fea ture f and sentence s is given by the so label assigned to the tuple f s 34 experimentsin this section we evaluate opines performance on thefollowing tasks finding so labels of words in the context of known features and sentences distinguishing between opinion and nonopinion phrases in the context of known features and sentences finding the correct polarityof extracted opinion phrases in the context of known fea tures and sentences while other systems such as have addressed these tasks to some degree opine is the first to report resultswe first ran opine on 13841 sentences and 538 previously extracted featuresopine searched for a so label assignment for 1756 different words in the context of the given features and sentenceswe compared opine against two baseline meth ods pmi and hupmi is an extended version of smethod for finding the so label of a phrase for a given tuple pmi ignores the sentence generates a phrase based on the word and the fea ture clean roomand finds its so label using pmi statisticsif unsure of the label pmi tries to find the orientation of the potential opinion word insteadthe search engine queries use domainspecific keywords which are dropped if they 344 lead to low countshu is a wordnetbased method for finding a 
words contextindependent semantic orientationit extends hus adjective labeling method in a number of ways in order to handle nouns verbs and adverbs in addition to adjectives and in order to improve coveragehus method starts with two sets of positive and negative words and iteratively grows each one by including synonyms andantonyms from wordnetthe final sets are used to pre dict the orientation of an incoming wordtype pmi hu opine p r p r p r adj 073 091 002 017 007 003 nn 063 092 004 024 011 008 vb 071 088 003 012 001 001 adv 082 092 002 001 006 001 average 072 091 003 014 006 003 table 6 finding so labels of potential opinion words in the context of given product features and sentencesopines precision is higher than that of pmi and huall results are reported with respect to pmi notation adjadjectives nnnouns vbverbs advadverbs 341 experiments so labelson the task of finding so labels for words in the con text of given features and review sentences opine obtains higher precision than both baseline methods at a smallloss in recall with respect to pmias described be low this result is due in large part to opines ability to handle contextsensitive opinion wordswe randomly selected 200 tuples for each word type andobtained a test set containing 800 tuplestwo annota tors assigned positive negative and neutral labels to eachtuple we re tained the tuples on which the annotators agreed as the gold standardwe ran pmi and hu on the test data and compared the results against opines results on the same datain order to quantify the benefits of each of the threesteps of our method for finding so labels we also compared opine with a version which only finds so la bels for words and a version which finds so labels for words in the context of given features but does not take into account given sentenceswe have learned from this comparison that opines precision gain over pmi andhu is mostly due to to its ability to handle context sensitive words in a large number of casesalthough hu does not handle contextsensitive so label assignment its average precision was reasonable and better than that of pmifinding a words so label is good enough in the case of strongly positiveor negative opinion words which account for the major ity of opinion instancesthe methods loss in recall is due to not recognizing words absent from wordnet or not having enough information to classify some words in wordnetpmi typically does well in the presence of strongly positive or strongly negative wordsits high recall iscorrelated with decreased precision but overall this sim ple approach does wellpmis main shortcoming is misclassifying terms such as basicor visiblewhich change orientation based on context342 experiments opinion phrases in order to evaluate opine on the tasks of opinion phrase extraction and opinion phrase polarity extraction in the context of known features and sentences we used aset of 550 sentences containing previously extracted fea turesthe sentences were annotated with the opinion phrases corresponding to the known features and with the opinion polaritywe compared opine with pmi and hu on the tasks of interestwe found that opine hadthe highest precision on both tasks at a small loss in re call with respect to pmiopines ability to identify a words so label in the context of a given feature and sentence allows the system to correctly extract opinionsexpressed by words such as bigor small whose se mantic orientation varies based on contextmeasure pmi hu opine op extraction precision 071 006 008 op extraction recall 
078 008 002 op polarity precision 080 004 006 op polarity recall 093 007 004 table 7 extracting opinion phrases and opinion phrase polarity corresponding to known features and sentencesopines precision is higher than that of pmi and of huall results are reported with respect to pmithe key components of opine described in this paper are the pmi feature assessment which leads to highprecisionfeature extraction and the use of relaxationlabeling in or der to find the semantic orientation of potential opinionwordsthe reviewmining work most relevant to our re search is that of and both identify product features from reviews but opine significantly improves on both does not assess candidate features so its precision is lower than opines employsan iterative semiautomatic approach which requires human input at every iterationneither model explicitly ad dresses composite or implicit featuresother systems also look at web product reviews but they do not extract 345 opinions about particular product featuresopines use of meronymy lexicosyntactic patterns is similar to that of many others from to recognizing the subjective character and polarity of words phrases or sentences has been addressed by many authors including most recently reports on the use of spin models to infer the semantic orientation of wordsthe papers global optimization approach and use of multiple sources of constraints on a words semantic orientation is similar to ours but the mechanism differs and they currently omit the use of syntactic informationsubjective phrases are used by and others in order to classify reviews or sentences as positive or negativeso far opines focus has been on extracting and analyzing opinion phrases corresponding to specific features in specific sentences rather than on determining sentence or review polarityopine is an unsupervised information extraction systemwhich extracts finegrained features and associated opinions from reviewsopines use of the web as a corpus helps identify product features with improved preci sion compared with previous workopine uses a novel relaxationlabeling technique to determine the semantic orientation of potential opinion words in the context ofthe extracted product features and specific review sentences this technique allows the system to identify cus tomer opinions and their polarity with high precision and recallwe would like to thank the knowitall project and theanonymous reviewers for their commentsmichael gamon costas boulis and adam carlson have also pro vided valuable feedbackwe thank minquing hu andbing liu for providing their data sets and for their com mentsfinally we are grateful to bernadette minton and fetch technologies for their help in collecting additional reviewsthis research was supported in part by nsf grant iis0312988 darpa contract nbchd030010 onr grant n000140210324 as well as gifts from google and the turing center
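The PMI-based initialization that OPINE inherits from KnowItAll's assessor can be pictured with the following sketch. The hits() and near_hits() arguments stand in for search-engine hit-count queries, and the thresholds are invented for illustration; only the shape of the computation (difference of PMI with positive and negative keywords, with a neutral fallback) follows the paper.

def pmi(hits_near, hits_a, hits_b):
    # PMI approximated from hit counts, in the spirit of KnowItAll's assessor:
    # PMI(f, d) ~ Hits(f near d) / (Hits(f) * Hits(d)).
    if hits_a == 0 or hits_b == 0:
        return 0.0
    return hits_near / (hits_a * hits_b)

def initial_so_label(word, pos_keywords, neg_keywords, hits, near_hits,
                     min_cooccurrence=5, neutral_margin=1e-7):
    # hits(term) and near_hits(a, b) are hypothetical wrappers around a search
    # engine; the two thresholds below are illustrative, not the paper's values.
    pos = sum(pmi(near_hits(word, k), hits(word), hits(k)) for k in pos_keywords)
    neg = sum(pmi(near_hits(word, k), hits(word), hits(k)) for k in neg_keywords)
    cooccurrences = sum(near_hits(word, k) for k in pos_keywords + neg_keywords)
    so = pos - neg
    if cooccurrences < min_cooccurrence or abs(so) < neutral_margin:
        return "neutral"           # small SO score or rare co-occurrence
    return "positive" if so > 0 else "negative"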
H05-1043
Extracting product features and opinions from reviews. Consumers are often forced to wade through many online reviews in order to make an informed product choice. This paper introduces OPINE, an unsupervised information-extraction system which mines reviews in order to build a model of important product features, their evaluation by reviewers, and their relative quality across products. Compared to previous work, OPINE achieves 22% higher precision on the feature extraction task. OPINE's novel use of relaxation labeling for finding the semantic orientation of words in context leads to strong performance on the tasks of finding opinion phrases and their polarity. Our dictionary-based method utilizes Wikipedia to find an entry page for a phrase or a single term in a query. We not only analyze the polarity of opinions regarding product features but also rank opinions based on their strength. We present a method that identifies product features using corpus statistics, WordNet relations and morphological cues. The relevance ranking and extraction was performed with pointwise mutual information.
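Because both the paper and this summary lean on relaxation labeling, a generic version of the iterative update is sketched below. The exact support function and update equation used by OPINE are not reproduced here; multiplying the current estimate by (support + alpha) and renormalizing should be read as the standard relaxation-labeling scheme rather than the paper's precise formula.

def relaxation_labeling(objects, labels, init_probs, support, alpha=0.1, max_iters=50):
    # objects: things to label (words, (word, feature) pairs, or tuples).
    # init_probs[o][l]: initial probability of label l for object o.
    # support(o, l, probs): neighborhood support q(o, l) computed from the current
    # label probabilities of o's neighbors and the local constraints.
    probs = {o: dict(init_probs[o]) for o in objects}
    for _ in range(max_iters):
        new_probs = {}
        for o in objects:
            scores = {l: probs[o][l] * (support(o, l, probs) + alpha) for l in labels}
            z = sum(scores.values()) or 1.0
            new_probs[o] = {l: s / z for l, s in scores.items()}
        # stop once the most likely label of every object is unchanged
        stable = all(max(probs[o], key=probs[o].get) == max(new_probs[o], key=new_probs[o].get)
                     for o in objects)
        probs = new_probs
        if stable:
            break
    return {o: max(probs[o], key=probs[o].get) for o in objects}

The same loop can be reused for each of the three sub-tasks (words, word-feature pairs, and word-feature-sentence tuples) by swapping in different objects, initial probabilities and support functions, which is how the paper describes its staged approach.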
recognizing contextual polarity in phraselevel sentiment analysis this paper presents a new approach to phraselevel sentiment analysis that firstdetermines whether an expression is neu tral or polar and then disambiguates the polarity of the polar expressions with thisapproach the system is able to automat ically identify the contextual polarity for a large subset of sentiment expressionsachieving results that are significantly bet ter than baseline sentiment analysis is the task of identifying positive and negative opinions emotions and evaluationsmost work on sentiment analysis has been done atthe document level for example distinguishing pos itive from negative reviewshowever tasks suchas multiperspective question answering and sum marization opinionoriented information extraction and mining product reviews require sentencelevelor even phraselevel sentiment analysisfor exam ple if a question answering system is to successfully answer questions about peoples opinions it must be able to pinpoint expressions of positive and negative sentiments such as we find in the sentences below african observers generally approved of his victory while western governments denouncedit a succession of officers filled the tv screen to say they supported the people and that the killings were not tolerable we do not hate the sinnerhe says but we hatethe sina typical approach to sentiment analysis is to start with a lexicon of positive and negative words and phrasesin these lexicons entries are tagged with their a priori prior polarity out of context doesthe word seem to evoke something positive or some thing negativefor example beautiful has a positiveprior polarity and horrid has a negative prior polar ityhowever the contextual polarity of the phrase in which a word appears may be different from thewords prior polarityconsider the underlined polar ity words in the sentence below philip clapp president of the national environ ment trust sums up well the general thrust of the reaction of environmental movements there is noreason at all to believe that the polluters are sud denly going to become reasonableof these words trustwellreasonand rea sonablehave positive prior polarity but they are not all being used to express positive sentimentsthe word reasonis negated making the contex tual polarity negativethe phrase no reason at all to believechanges the polarity of the proposition that follows because reasonablefalls within thisproposition its contextual polarity becomes nega tivethe word trustis simply part of a referringexpression and is not being used to express a sentiment thus its contextual polarity is neutralsimi larly for polluters in the context of the article it simply refers to companies that polluteonly wellhas the same prior and contextual polaritymany things must be considered in phraselevel sentiment analysisnegation may be local or involve longerdistance dependencies such as the negation of the proposition or the negation of the subject in addition certain phrases that contain negation words intensify ratherthan change polarity contextual polarity may also be influenced by modality or not real no reason at all to believe is irrealis for example word sense the syntactic role of a word in the sen tence and diminishers such as little for amore detailed discussion of contextual polarity in fluencersthis paper presents new experiments in automat ically distinguishing prior and contextual polaritybeginning with a large stable of clues marked with prior polarity we identify the contextual polarity of 
the phrases that contain instances of those clues in the corpuswe use a twostep process that employs machine learning and a variety of featuresthe first step classifies each phrase containing a clue as neutral or polarthe second step takes all phrases marked in step one as polar and disambiguates theircontextual polarity with this approach the system is able to auto matically identify the contextual polarity for a large subset of sentiment expressions achieving resultsthat are significantly better than baselinein addition we describe new manual annotations of contextual polarity and a successful interannotator agree ment studyto create a corpus for the experiments below weadded contextual polarity judgments to existing annotations in the multiperspective question answering opinion corpus1 namely to the an notations of subjective expressions2a subjective expression is any word or phrase used to express an opinion emotion evaluation stance speculation 1the mpqa corpus is described in and available at nrrcmitreorgnrrcpublicationshtm2in the mpqa corpus subjective expressions are direct subjective expressions with nonneutral expression intensity plus all the expressive subjective elementsplease see for more details on the existing annotations in the mpqa corpusetc a general covering term for such states is private state in the mpqa cor pus subjective expressions of varying lengths are marked from single words to long phrasesfor this work our focus is on sentiment expressions positive and negative expressions of emo tions evaluations and stancesas these are types of subjective expressions to create the corpus we just needed to manually annotate the existing subjective expressions with their contextual polarityin particular we developed an annotationscheme3 for marking the contextual polarity of sub jective expressionsannotators were instructed to tag the polarity of subjective expressions as positive negative both or neutralthe positive tag is for positive emotions evaluations and stances the negative tag is for negative emotions eval uations and stances the both tag is applied to sentiment expres sions that have both positive and negative polaritythe neutral tag is used for all other subjective expressions those that express a different type of sub jectivity such as speculation and those that do not have positive or negative polaritybelow are examples of contextual polarity anno tationsthe tags are in boldface and the subjective expressions with the given tags are underlined thousands of coup supporters celebrated overnight waving flags blowing whistles the criteria set by rice are the following thethree countries in question are repressive and grave human rights violators besides politicians refer to good and evil only for purposes of intimidation and exaggeration jerome says the hospital feels no dif ferent than a hospital in the statesthe annotators were asked to judge the contextual polarity of the sentiment that is ultimately be ing conveyed by the subjective expression ie once the sentence has been fully interpretedthus the subjective expression they have not succeeded and 3the annotation instructions are available at httpwwwcspittedutwilson348 will never succeed was marked as positive in the sentence they have not succeeded and will never succeed in breaking the will of this valiant peoplethe reasoning is that breaking the will of a valiantpeople is negative hence not succeeding in break ing their will is positiveto measure the reliability of the polarity annotation scheme we conducted 
an agreement study with two annotators using 10 documents from the mpqa corpusthe 10 documents contain 447 subjective expressionstable 1 shows the contingency table for the two annotatorsjudgmentsoverall agreement is 82 with a kappa value of 072neutral positive negative both total neutral 123 14 24 0 161 positive 16 73 5 2 96 negative 14 2 167 1 184 both 0 3 0 3 6 total 153 92 196 6 447 table 1 agreement for subjective expressions for 18 of the subjective expressions at least oneannotator used an uncertain tag when marking po larityif we consider these cases to be borderline and exclude them from the study percent agreement increases to 90 and kappa rises to 084thus the annotator agreement is especially high when both are certainin total 15991 subjective expressions from 425 documents were annotated withcontextual polarity as described aboveof these sen tences 28 contain no subjective expressions 25 contain only one and 47 contain two or moreofthe 4247 sentences containing two or more subjec tive expressions 17 contain mixtures of positive and negative expressions and 62 contain mixturesof polar and neutral subjec tive expressionsthe annotated documents are divided into two setsthe first is a development set used for data exploration and feature developmentweuse the second set in 10fold crossvalidation experiments described belowfor the experiments in this paper we use a lexicon of over 8000 subjectivity cluessubjectivity clues arewords and phrases that may be used to express pri vate states ie they have subjective usages for this work only singleword clues are usedto compile the lexicon we began with a list of subjectivity clues from the words in this list were grouped in previous work according to their reliability as subjectivity clueswords that are subjective in most contexts were marked strongly subjective and those that may only have certain subjective usages were marked weakly subjective we expanded the list using a dictionary and a thesaurus and also added words from the generalinquirer positive and negative word lists which we judged to be potentially subjectivewe also gave the new words reliability tags either strongsubj or weaksubjthe next step was to tag the clues in the lexicon with their prior polarityfor words that came from positive and negative word lists welargely retained their original polarity either posi tive or negativewe assigned the remaining words one of the tags positive negative both or neutralby far the majority of clues 928 aremarked as having either positive or nega tive prior polarityonly a small number of clues are marked as having both positive and negative polarity69 of the clues in the lexicon are marked as neutralexamples of these are verbs such as feel look and think and intensifiers such asdeeply entirely and practicallythese words are included because although their prior polarity is neu tral they are good clues that a sentiment is beingexpressed in cluding them increases the coverage of the system349the goal of the experiments described below is to classify the contextual polarity of the expressions that contain instances of the subjectivity clues in our lexiconwhat the system specifically does is give each clue instance its own labelnote that thesystem does not try to identify expression bound ariesdoing so might improve performance and is a promising avenue for future research61 definition of the gold standardwe define the gold standard used to train and test the system in terms of the manual annotations described in section 2the gold standard 
class of a clue instance that is not in a subjective expression is neutral since the clue is not even in a subjective expression it is not contained in a sentiment expression otherwise if a clue instance appears in just one subjective expression then the class assigned to the clue instance is the class of the subjective expression if a clue appears in at least one positive and one negative subjective expression then its class is both if it is in a mixture of negative and neutral subjective expressions its class is negative if it is in a mixture of positive and neutral subjective expressions its class is positive

6.2 performance of a priorpolarity classifier

an important question is how useful prior polarity alone is for identifying contextual polarity to answer this question we create a classifier that simply assumes that the contextual polarity of a clue instance is the same as the clues prior polarity and we explore the classifiers performance on the development set this simple classifier has an accuracy of 48 from the confusion matrix given in table 2 we see that 76 of the errors result from words with nonneutral prior polarity appearing in phrases with neutral contextual polarity

table 2 confusion matrix for the priorpolarity classifier on the development set (gold classes in rows, priorpolarity predictions in columns)
gold \ pred | neut | pos | neg | both | total
neut | 798 | 784 | 698 | 4 | 2284
pos | 81 | 371 | 40 | 0 | 492
neg | 149 | 181 | 622 | 0 | 952
both | 4 | 11 | 13 | 5 | 33
total | 1032 | 1347 | 1373 | 9 | 3761

6.3 contextual polarity disambiguation

the fact that words with nonneutral prior polarity so frequently appear in neutral contexts led us to adopt a twostep approach to contextual polarity disambiguation for the first step we concentrate on whether clue instances are neutral or polar in context for the second step we take all clue instances marked as polar in step one and focus on identifying their contextual polarity for both steps we develop classifiers using the boostexter adaboosthm machine learning algorithm with 5000 rounds of boosting the classifiers are evaluated in 10fold crossvalidation experiments

6.3.1 neutralpolar classification

the neutralpolar classifier uses 28 features listed in table 3

table 3 features for neutralpolar classification
word features: word token | word partofspeech | word context | prior polarity (positive negative both neutral) | reliability class (strongsubj or weaksubj)
modification features: preceded by adjective (binary) | preceded by adverb (binary) | preceded by intensifier (binary) | is intensifier (binary) | modifies strongsubj (binary) | modifies weaksubj (binary) | modified by strongsubj (binary) | modified by weaksubj (binary)
sentence features: strongsubj clues in current sentence (count) | strongsubj clues in previous sentence (count) | strongsubj clues in next sentence (count) | weaksubj clues in current sentence (count) | weaksubj clues in previous sentence (count) | weaksubj clues in next sentence (count) | adjectives in sentence (count) | adverbs in sentence (count) | cardinal number in sentence (binary) | pronoun in sentence (binary) | modal in sentence (binary)
structure features: in subject (binary) | in copular (binary) | in passive (binary)
document feature: document topic

word features word context is a bag of three word tokens the previous word the word itself and the next word the prior polarity and reliability class are indicated in the lexicon

modification features these are binary relationship features the first four involve relationships with the word immediately before or after if the word is a noun preceded by an adjective if the preceding word is an adverb other than not if the preceding word is an intensifier and if the word itself is an intensifier a word is considered an intensifier if it appears in a list of intensifiers and if it precedes a word of the appropriate partofspeech the modify features involve the dependency parse tree for the sentence obtained by first parsing the sentence and then converting the tree into its dependency representation in a dependency representation every node in the tree structure is a surface word the edge between a parent and a child specifies the grammatical relationship between the two words figure 1 shows an example the modifies strongsubj and modifies weaksubj features are true if the word and its parent share an adj mod or vmod relationship and if its parent is an instance of a clue from the lexicon with strongsubj or weaksubj reliability the modified by strongsubj and modified by weaksubj features are similar but look for relationships and clues in the words children

[figure 1 the dependency tree for the sentence the human rights report poses a substantial challenge to the us interpretation of good and evil prior polarity is marked in parentheses for words that match clues from the lexicon]

structure features these are binary features that are determined by starting with the word instance and climbing up the dependency parse tree toward the root looking for particular relationships words or patterns the in subject feature is true if we find a subj relationship the in copular feature is true if in subject is false and if a node along the path is both a main verb and a copular verb the in passive feature is true if a passive verb pattern is found on the climb

sentence features these are features that were found useful for sentencelevel subjectivity classification by wiebe and riloff they include counts of strongsubj and weaksubj clues in the current previous and next sentences counts of adjectives and adverbs other than not in the current sentence and binary features to indicate whether the sentence contains a pronoun a cardinal number and a modal other than will

document feature there is one document feature representing the topic of the document a document may belong to one of 15 topics ranging from specific to more general topics

table 4 gives neutralpolar classification results for the 28feature classifier and two simpler classifiers that provide our baselines the first row in the table lists the results for a classifier that uses just one feature the word token the second row shows the results for a classifier that uses both the word token and the words prior polarity as features the results for the 28feature classifier are listed in the last row the 28feature classifier performs significantly better than the two simpler classifiers as measured by accuracy polar fmeasure and neutral fmeasure it has an accuracy of 75.9 with a polar fmeasure of 63.4 and a neutral fmeasure of 82.1 focusing on the metrics for polar expressions its interesting to note that using just the word token as a feature produces a classifier with a precision slightly better than the 28feature classifier but with a recall that is 20 lower adding a feature for the prior polarity improves recall so that it is only 4.4 lower but this hurts precision which drops to 4.2 lower than the 28feature classifiers precision it is only with all the features that we get the best result good precision with the highest recall

the clues in the priorpolarity lexicon have 19506 instances in the test set according to the 28feature neutralpolar classifier 5671 of these instances are polar in context it is these clue instances that are passed on to the second step in the contextual disambiguation process polarity classification

6.3.2 polarity classification

ideally this second step in the disambiguation process would be a threeway classification task determining whether the contextual polarity is positive negative or both however although the majority of neutral expressions have been filtered out by the neutralpolar classification in step one a number still remain so for this step the polarity classification task remains fourway positive negative both and neutral

table 6 lists the features used by the polarity classifier

table 6 features for polarity classification
word features: word token | word prior polarity (positive negative both neutral)
polarity features: negated (binary) | negated subject (binary) | modifies polarity (positive negative neutral both notmod) | modified by polarity (positive negative neutral both notmod) | conj polarity (positive negative neutral both notmod) | general polarity shifter (binary) | negative polarity shifter (binary) | positive polarity shifter (binary)

word token and word prior polarity are unchanged from the neutralpolar classifier negated is a binary feature that captures whether the word is being locally negated its value is true if a negation word or phrase is found within the four preceding words or in any of the words children in the dependency tree and if the negation word is not in a phrase that intensifies rather than negates the negated subject feature is true if the subject of the clause containing the word is negated the modifies polarity modified by polarity and conj polarity features capture specific relationships between the word instance and other polarity words it may be related to if the word and its parent in the dependency tree share an obj adj mod or vmod relationship the modifies polarity feature is set to the prior polarity of the words parent the modified by polarity feature is similar looking for adj mod and vmod relationships and polarity clues within the words children the conj polarity feature determines if the word is in a conjunction if so the value of this feature is its siblings prior polarity figure 1 helps to illustrate these features modifies polarity is negative for the word substantial modified by polarity is positive for the word challenge and conj polarity is negative for the word good

the last three polarity features look in a window of four words before searching for the presence of particular types of polarity influencers general polarity shifters reverse polarity negative polarity shifters typically make the polarity of an expression negative positive polarity shifters typically make the polarity of an expression positive

the polarity classification results for this second step in the contextual disambiguation process are given in table 5 also listed in the table are results for the two simple classifiers that provide our baselines the first line in table 5 lists the results for the classifier that uses just one feature the word token the second line shows the results for the classifier that uses both the word token and the words prior polarity as features the last line shows the results for the polarity classifier that uses all 10 features from table 6 mirroring the results from step one the more complex classifier performs significantly better than the simpler classifiers as measured by accuracy and all of the fmeasures the 10feature classifier achieves an accuracy of 65.7 which is 4.3 higher than the more challenging baseline provided by the word prior polarity classifier positive fmeasure is 65.1 negative fmeasure is 77.2 and neutral fmeasure is 46.2
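to make the dependency-based polarity features above concrete, here is a minimal python sketch; the token fields, the negation word list and the lexicon lookup are illustrative assumptions rather than the authors' implementation, and the check that a negation word is not inside an intensifying phrase is omitted for brevity

```python
# illustrative sketch of three of the dependency-based polarity features
# described above; token fields, the negation list and the lexicon are
# simplified assumptions, and the "not in an intensifying phrase" check
# on negation words is omitted for brevity
NEGATION_WORDS = {"not", "never", "no", "neither", "nor", "nothing"}  # assumed list

def children(tokens, i):
    # indices of the dependency children of token i
    return [t["i"] for t in tokens if t["head"] == i]

def negated(tokens, i):
    # true if a negation word occurs within the four preceding words
    # or among the word's children in the dependency tree
    window = tokens[max(0, i - 4):i]
    if any(t["word"] in NEGATION_WORDS for t in window):
        return True
    return any(tokens[c]["word"] in NEGATION_WORDS for c in children(tokens, i))

def modifies_polarity(tokens, i, lexicon):
    # prior polarity of the parent when the word modifies it through an
    # obj, adj, mod or vmod relation, otherwise "notmod"
    tok = tokens[i]
    if tok["head"] is not None and tok["deprel"] in {"obj", "adj", "mod", "vmod"}:
        return lexicon.get(tokens[tok["head"]]["word"], "neutral")
    return "notmod"

def conj_polarity(tokens, i, lexicon):
    # prior polarity of a sibling conjunct when the word is in a conjunction
    tok = tokens[i]
    if tok["deprel"] == "conj" and tok["head"] is not None:
        siblings = [c for c in children(tokens, tok["head"]) if c != i]
        if siblings:
            return lexicon.get(tokens[siblings[0]]["word"], "neutral")
    return "notmod"
```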
focusing on the metrics for positive and negative expressions we again see that the simpler classifiers take turns doing better or worse for precision and recall using just the word token positive precision is slightly higher than for the 10feature classifier but positive recall is 11.6 lower add the prior polarity and positive recall improves but at the expense of precision which is 12.6 lower than for the 10feature classifier the results for negative expressions are similar the wordtoken classifier does well on negative recall but poorly on negative precision when prior polarity is added negative recall improves but negative precision drops it is only with the addition of the polarity features that we achieve both higher precisions and higher recalls

table 4 results for step 1 neutralpolar classification
classifier | acc | polar rec | polar prec | polar f | neut rec | neut prec | neut f
word token | 73.6 | 45.3 | 72.2 | 55.7 | 89.9 | 74.0 | 81.2
word + prior polarity | 74.2 | 54.3 | 68.6 | 60.6 | 85.7 | 76.4 | 80.7
28 features | 75.9 | 56.8 | 71.6 | 63.4 | 87.0 | 77.7 | 82.1

table 5 results for step 2 polarity classification
classifier | acc | positive (rec prec f) | negative (rec prec f) | both (rec prec f) | neutral (rec prec f)
word token | 61.7 | 59.3 63.4 61.2 | 83.9 64.7 73.1 | 9.2 35.2 14.6 | 30.2 50.1 37.7
word + prior polarity | 63.0 | 69.4 55.3 61.6 | 80.4 71.2 75.5 | 9.2 35.2 14.6 | 33.5 51.8 40.7
10 features | 65.7 | 67.1 63.3 65.1 | 82.1 72.9 77.2 | 11.2 28.4 16.1 | 41.4 52.4 46.2

to explore how much the various polarity features contribute to the performance of the polarity classifier we perform four experiments in each experiment a different set of polarity features is excluded and the polarity classifier is retrained and evaluated table 7 lists the features that are removed for each experiment the only significant difference in performance in these experiments is neutral fmeasure when the modification features are removed these ablation experiments show that the combination of features is needed to achieve significant results over baseline for polarity classification

table 7 features removed for each ablation experiment
experiment | features removed
ab1 | negated, negated subject
ab2 | modifies polarity, modified by polarity
ab3 | conj polarity
ab4 | general, negative and positive polarity shifters

much work on sentiment analysis classifies documents by their overall sentiment for example determining whether a review is positive or negative in contrast our experiments classify individual words and phrases a number of researchers have explored learning words and phrases with prior positive or negative polarity in contrast we begin with a lexicon of words with established prior polarities and identify the contextual polarity of phrases in which instances of those words appear in the corpus to make the relationship between that task and ours clearer note that some word lists used to evaluate methods for recognizing prior polarity are included in our priorpolarity lexicon used for evaluation by turney and lists of manually identified positive and negative adjectives used for evaluation by hatzivassiloglou and mckeown some research classifies the sentiments of sentences yu and hatzivassiloglou kim and hovy hu and liu and grefenstette et al 4 all begin by first creating priorpolarity lexicons yu and hatzivassiloglou then assign a sentiment to a sentence by averaging the prior semantic orientations of instances of lexicon words in the sentence thus they do not identify the contextual polarity of individual phrases containing clues as we 4in
the units that are classified are fixed windows around named entities rather than sentences353 do in this paperkim and hovy hu and liu andgrefenstette et al multiply or count the prior po larities of clue instances in the sentencethey also consider local negation to reverse polarityhowever they do not use the other types of features in our experiments and they restrict their tags to positiveand negative in addition their systems assign one sen timent per sentence our system assigns contextual polarity to individual expressionsas seen abovesentences often contain more than one sentiment ex pressionnasukawa yi and colleagues classify the contextual polarity of sentiment expressions as we dothus their workis probably most closely related to oursthey clas sify expressions that are about specific items and use manually developed patterns to classify polaritythese patterns are highquality yielding quite highprecision but very low recalltheir system classifies a much smaller proportion of the sentiment ex pressions in a corpus than ours doesin this paper we present a new approach to phraselevel sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressionswith this approach we are able to automatically identify the contextual polarity for a large subset ofsentiment expressions achieving results that are sig nificantly better than baselinethis work was supported in part by the nsf under grant iis0208798 and by the advanced research and development activity
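as an illustration of the two-step contextual polarity disambiguation summarised above, the following rough sketch wires the two classification steps together; the classifier callables and feature extractors are placeholders for the boosted classifiers and the feature sets of tables 3 and 6, not the actual system

```python
# rough sketch of the two-step pipeline described above: step 1 decides
# neutral vs. polar in context, step 2 assigns polarity to the instances
# judged polar; the classifier callables and feature extractors stand in
# for the boosted classifiers and the feature sets of tables 3 and 6
def classify_contextual_polarity(clues, neutral_polar_clf, polarity_clf,
                                 np_features, pol_features):
    labels = []
    for clue in clues:
        if neutral_polar_clf(np_features(clue)) == "neutral":
            labels.append("neutral")
        else:
            # four-way step: positive, negative, both or (residual) neutral
            labels.append(polarity_clf(pol_features(clue)))
    return labels
```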
H05-1044
recognizing contextual polarity in phraselevel sentiment analysis this paper presents a new approach to phraselevel sentiment analysis that first determines whether an expression is neutral or polar and then disambiguates the polarity of the polar expressions with this approach the system is able to automatically identify the contextual polarity for a large subset of sentiment expressions achieving results that are significantly better than baseline we propose supervised learning dividing the resources into prior polarity and context polarity our experiments indicate that lexiconlookup approaches to subjectivity analysis will have limited success on general texts we manually construct a polarity lexicon in which each entry is annotated with its degree of subjectivity as well as its sentiment polarity our mpqa lexicon contains separate lexicons for subjectivity clues intensifiers and valence shifters which are used for identifying opinion roots modifiers and negation words
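the prior-polarity baseline discussed in section 6.2 above, which assumes the contextual polarity of a clue instance equals its prior polarity, can be sketched as follows; the instance and lexicon representations are assumptions made for illustration only

```python
from collections import Counter

# sketch of the prior-polarity baseline: predict that the contextual
# polarity of a clue instance equals its prior polarity; instances are
# assumed to be (word, gold_label) pairs and lexicon maps a clue to its
# prior polarity
def prior_polarity_baseline(instances, lexicon):
    confusion = Counter()
    correct = 0
    for word, gold in instances:
        predicted = lexicon.get(word, "neutral")
        confusion[(gold, predicted)] += 1
        correct += predicted == gold
    accuracy = correct / len(instances) if instances else 0.0
    return accuracy, confusion
```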
identifying sources of opinions with conditional random fields and extraction patterns recent systems have been developed forsentiment classification opinion recognition and opinion analysis we pursue another aspect of opinion analysis identi fying the sources of opinions emotions and sentiments we view this problem as an information extraction task and adopta hybrid approach that combines con ditional random fields and a variation of autoslog while crfs model source identification as a sequence tagging task autoslog learns extraction patterns our re sults show that the combination of these two methods performs better than either one alone the resulting system identifies opinion sources with 793 precision and 595 recall using a head noun matching measure and 812 precision and 606 recall using an overlap measure in recent years there has been a great deal of interest in methods for automatically identifying opin ions emotions and sentiments in textmuch of this research explores sentiment classification a text categorization task in which the goal is to classifya document as having positive or negative polar ity pang et al turney dave et al pang and leeother research efforts analyze opinion expressions at the sentence level or below to recog nize opinions their polarity and their strength pang and lee wilson et al yu and hatzivassiloglou wiebeand riloff many applications could benefit from these opinion analyzers including prod uct reputation tracking yi et al opinionoriented summarization and question answering yu and hatzivassiloglou we focus here on another aspect of opinion analysis automatically identifying the sources of the opinionsidentifying opinion sources willbe especially critical for opinionoriented questionanswering systems and opinionoriented summarization systems both of which need to distinguish the opinions of one source from those of another1 the goal of our research is to identify direct and indirect sources of opinions emotions sentiments and other private states that are expressed in textto illustrate the nature of this problem consider the examples below s1 taiwanborn voters favoring independence1in related work we investigate methods to identify the opinion expressions wiebe and riloff wilson et al and the nesting structure of sources the target of each opinion ie what the opinion is directed towards is currently being annotated manually for our corpus355 s2 according to the report the human rights record in china is horrendouss3 international officers believe that the eu will prevails4 international officers said us officials want the eu to prevailin s1 the phrase taiwanborn votersis the direct source of the favoringsen timentin s2 the reportis the direct source of the opinion about chinas human rights recordin s3 international officersare the direct source of an opinion regarding the euthe same phrase in s4 however denotes an indirect source of an opinion whose direct source is us officialsin this paper we view source identification as an information extraction task and tackle the problemusing sequence tagging and pattern matching tech niques simultaneouslyusing syntactic semantic and orthographic lexical features dependency parse features and opinion recognition features we train alinearchain conditional random field to identify opinion sourcesin ad dition we employ features based on automaticallylearned extraction patterns and perform feature in duction on the crf modelwe evaluate our hybrid approach using the nrrc corpus which is manually annotated with direct 
and indirect opinion source informationexperimental results show that thecrf model performs well and that both the extraction patterns and feature induction produce perfor mance gainsthe resulting system identifies opinionsources with 793 precision and 595 recall using a head noun matching measure and 812 pre cision and 606 recall using an overlap measurethe goal of information extraction systems is to extract information about events including the participants of the eventsthis task goes beyond named entity recognition because it requires the recognition of role relationshipsfor example an ie system that extracts information about corporate acquisitions must distinguish between the company that is doing the acquiring and the company that is being acquiredsim ilarly an ie system that extracts information about terrorism must distinguish between the person who is the perpetrator and the person who is the victimwe hypothesized that ie techniques would be well suited for source identification because an opinion statement can be viewed as a kind of speech event with the source as the agentwe investigate two very different learningbasedmethods from information extraction for the problem of opinion source identification graphical mod els and extraction pattern learningin particular we consider conditional random fields and a variation of autoslog crfs have been used successfully for named en tity recognition sarawagi and cohen and autoslog has performed well on information extraction tasks in sev eral domains while crfs treatsource identification as a sequence tagging task au toslog views the problem as a patternmatching task acquiring symbolic patterns that rely on both thesyntax and lexical semantics of a sentencewe hy pothesized that a combination of the two techniques would perform better than either one alonesection 3 describes the crf approach to identify ing opinion sources and the features that the systemusessection 4 then presents a new variation of au toslog autoslogse which generates ie patterns toextract sourcessection 5 describes the hybrid sys tem we encode the ie patterns as additional features in the crf modelfinally section 6 presents our experimental results and error analysisrandom fieldswe defined the problem of opinion source identification as a sequence tagging task via crfs as fol lowsgiven a sequence of tokens x x1x2xn we need to generate a sequence of tags or labels y y1y2ynwe define the set of possible labelvalues as s t where s is the first to ken of a source t is a noninitial token of a source and is a token that is not part of any source2 a detailed description of crfs can be found in2this is equivalent to the iob tagging scheme used in syn tactic chunkers 356 lafferty et al for our sequence tagging problem we create a linearchain crf based on an undirected graph g where v is the set of random variables y yi1 i n one for each of n tokens in an input sentence and e 1 complainedbecause it anchors the ex pressionsourceextr indicates whether a word is extracted by any source patternfor example in thesentence president jacques chirac frequently complained about frances economy the words president jacques and chiracwould all be ex tracted by the complainedpatterneach extraction pattern has frequency and prob ability values produced by autoslogse hence we create four ie patternbased features for each token xi sourcepattfreq sourceextrfreq sourcepattprob and sourceextrprob where the frequency values are divided into threeranges 0 1 2 and the probability values are di vided 
into five ranges of equal sizewe used the multiperspective question answering corpus4 for our experimentsthis corpus 4the mpqa corpus can be freely obtained at httpnrrcmitreorgnrrcpublicationshtmconsists of 535 documents that have been manually annotated with opinionrelated information in cluding direct and indirect sourceswe used 135 documents as a tuning set for model development and feature engineering and used the remaining 400 documents for evaluation performing 10fold crossvalidationthese texts are english language ver sions of articles that come from many countries and cover many topics5we evaluate performance using 3 measures over lap match head match and exact matchold is a lenient measure that considers an extraction to be correct if it overlaps with any of the an notated wordshm is a more conservative measure that considers an extraction to be correct if its head matches the head of the annotated sourcewe reportthese somewhat loose measures because the annota tors vary in where they place the exact boundaries of a sourcethem is the strictest measure that requires an exact match between the extracted words and the annotated wordswe use three evaluation metricsrecall precision and fmeasure with recall and pre cision equally weighted61 baselineswe developed three baseline systems to assess the difficulty of our taskbaseline1 labels as sources all phrases that belong to the semantic categories authority government human media organization or company proper nametable 1 shows that the precision is poor suggest ing that the third condition described in section 31 does play an important role in source identificationthe recall is much higher butstill limited due to sources that fall outside of the semantic categories or are not recognized as belong ing to these categoriesbaseline2 labels a noun phrase as a source if any of the following are true the np is the subject of a verb phrase containing an opinion word the np follows according to the np contains a possessive and is preceded byan opinion word or the np follows byand at taches to an opinion wordbaseline2s heuristicsare designed to address the first and the third condi tions in section 31table 1 shows that baseline2 is substantially better than baseline1baseline35this data was obtained from the foreign broadcast infor mation service a yous government agency359 recall prec f1 old 773 288 420 baseline1 hm 714 286 408 them 654 209 317 old 624 605 614 baseline2 hm 597 582 589 them 508 489 498 old 499 726 592 baseline3 hm 474 725 573 them 443 582 503 old 485 813 608 extraction patterns hm 469 785 587 them 419 702 525 crf old 561 810 663 basic features hm 551 792 650 them 500 724 592 crf old 591 824 689 basic ie pattern hm 581 805 675 features them 525 733 612 crffi old 577 807 673 basic features hm 568 788 660 them 517 724 603 crffi old 606 812 694 basic ie pattern hm 595 793 680 features them 541 727 620 table 1 source identification performance table labels a noun phrase as a source if it satisfies both baseline1 and baseline2s conditions as shown in table 1 the precision of this approach is the best of the three baselines but the recall is the lowest62 extraction pattern experimentwe evaluated the performance of the learned extrac tion patterns on the source identification taskthe learned patterns were applied to the test data and the extracted sources were scored against the manualannotations6 table 1 shows that the extraction pat terns produced lower recall than the baselines but with considerably higher precisionthese results show 
that the extraction patterns alone can identify 6these results were obtained using the patterns that had a probability 50 and frequency 1nearly half of the opinion sources with good accu racy63 crf experimentswe developed our crf model using the mallet code from mccallum for training we useda gaussian prior of 025 selected based on the tuning datawe evaluate the crf using the basic fea tures from section 3 both with and without the ie pattern features from section 5table 1 shows that the crf with basic features outperforms all of thebaselines as well as the extraction patterns achiev ing an fmeasure of 663 using the old measure 650 using the hm measure and 592 using theem measureadding the ie pattern features fur ther increases performance boosting recall by about3 points for all of the measures and slightly increas ing precision as wellcrf with feature inductionone limitation of loglinear function models like crfs is that they cannot form a decision boundary from conjunctionsof existing features unless conjunctions are explic itly given as part of the feature vectorfor the task of identifying opinion sources we observedthat the model could benefit from conjunctive fea turesfor instance instead of using two separatefeatures human and parentchunkincludes opinionexpression the conjunction of the two is more informativefor this reason we applied the crf feature in duction approach introduced by mccallum as shown in table 1 where crffi stands for thecrf model with feature induction we see consistent improvements by automatically generating conjunctive featuresthe final system which com bines the basic features the ie pattern features and feature induction achieves an fmeasure of 694 for the old measure an fmeasure of 680 for the hm measure and an fmeasure of 620 for the them measure64 error analysisan analysis of the errors indicated some common mistakes some errors resulted from error propagation in 360our subsystemserrors from the sentence bound ary detector in gate were especially problematic because they causedthe collins parser to fail resulting in no depen dency tree informationsome errors were due to complex and unusualsentence structure which our rather simple fea ture encoding for crf could not capture wellsome errors were due to the limited coverage of the opinion lexiconwe failed to recognize some cases when idiomatic or vague expressions were used to express opinionsbelow are some examples of errors that we foundinterestingdoubly underlined phrases indicate in correctly extracted sources opinion words are singly underlinedfalse positives actually these three countries do have one common denominator ie that their values and policies do not agree with those of the united states and none of them are on good terms with the united states perhaps this is why fidel castro has not spoken out against what might go on in guantanamoin their values and policiesseems like a rea sonable phrase to extract but the annotation does notmark this as a source perhaps because it is some what abstractin spoken outis negated which means that the verb phrase does not bear an opinion but our system failed to recognize the negationfalse negatives and for this reason too they have a moral duty to speak out as swedish foreign minister anna lindh among others did yesterday in particular iran and iraq are at loggerheads with each other to this dayexample involves a complex sentence structure that our system could not deal with involves an uncommon opinion expression that our system did not recognizeto our 
knowledge our research is the first to auto matically identify opinion sources using the mpqaopinion annotation schemethe most closely re lated work on opinion analysis is bethard et al who use machine learning techniques to identify propositional opinions and their holders however their work is more limited in scope than ours in several waystheir work only addresses propositional opinions which arelocalized in the propositional argumentof certain verbs such as believeor realizein con trast our work aims to find sources for all opinions emotions and sentiments including those that are not related to a verb at allfurthermore berthardet als task definition only requires the identifica tion of direct sources while our task requires the identification of both direct and indirect sourcesbethard et al evaluate their system on manuallyannotated framenet and prop bank sentences and achieve 48 recall with 57 precisionour ie pattern learner can be viewed as a crossbetween autoslog and autoslog ts autoslog is a supervised learner that requires annotated training data but does notcompute statisticsautoslogts is a weakly super vised learner that does not require annotated databut generates coarse statistics that measure each patterns correlation with relevant and irrelevant docu mentsconsequently the patterns learned by bothautoslog and autoslogts need to be manually re viewed by a person to achieve good accuracyin contrast our ie learner autoslogse computes statistics directly from the annotated training data creating a fully automatic variation of autoslogwe have described a hybrid approach to the problem of extracting sources of opinions in textwe cast this problem as an information extraction task using both crfs and extraction patternsour research is the first to identify both direct and indirect sources for all types of opinions emotions and sentimentsdirections for future work include trying to in crease recall by identifying relationships between opinions and sources that cross sentence boundariesand relationships between multiple opinion expres sions by the same sourcefor example the fact that a coreferring noun phrase was marked as a source in one sentence could be a useful clue for extracting the source from another sentencethe probability or the strength of an opinion expression may also play a useful role in encouraging or suppressing source extraction361we thank the reviewers for their many helpful com ments and the cornell nlp group for their advice and suggestions for improvementthis work wassupported by the advanced research and develop ment activity by nsf grants iis0208028 and iis0208985 and by the xerox foundation
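two mechanics of the model described above can be made concrete with a short sketch: encoding annotated opinion-source spans with the s/t/- tagging scheme used by the linear-chain crf, and discretising the autoslog-se pattern statistics into the ie-pattern features; the span representation and the exact bin boundaries are assumptions for illustration

```python
# sketch of the S/T/- encoding of opinion-source spans and of the
# discretisation of pattern statistics into ie-pattern features;
# span representation and exact bin boundaries are assumptions
def spans_to_tags(n_tokens, source_spans):
    # source_spans are (start, end) token indices, end exclusive;
    # S = first token of a source, T = non-initial source token,
    # "-" = token outside any source (equivalent to an iob scheme)
    tags = ["-"] * n_tokens
    for start, end in source_spans:
        tags[start] = "S"
        for i in range(start + 1, end):
            tags[i] = "T"
    return tags

def pattern_feature_bins(freq, prob):
    # frequency binned into three ranges (0, 1, 2 or more) and the
    # probability into five equal-width ranges
    freq_bin = "0" if freq == 0 else ("1" if freq == 1 else "2+")
    prob_bin = min(int(prob * 5), 4)  # 0..4 for prob in [0, 1]
    return freq_bin, prob_bin

# e.g. spans_to_tags(6, [(0, 3)]) -> ['S', 'T', 'T', '-', '-', '-']
```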
H05-1045
identifying sources of opinions with conditional random fields and extraction patterns recent systems have been developed for sentiment classification opinion recognition and opinion analysis we pursue another aspect of opinion analysis identifying the sources of opinions emotions and sentiments we view this problem as an information extraction task and adopt a hybrid approach that combines conditional random fields and a variation of autoslog while crfs model source identification as a sequence tagging task autoslog learns extraction patterns our results show that the combination of these two methods performs better than either one alone the resulting system identifies opinion sources with 79.3 precision and 59.5 recall using a head noun matching measure and 81.2 precision and 60.6 recall using an overlap measure
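the three span-matching measures used in the evaluation above (exact match, head match and overlap match) reduce to a few comparisons; in this sketch spans are (start, end) token offsets with end exclusive and the head-finding function is an assumed stand-in for the syntactic head used in the paper

```python
# sketch of the exact, overlap and head matching measures and of the
# precision/recall computation over extracted vs. annotated sources;
# span representation and head_of are assumptions
def exact_match(pred, gold):
    return pred == gold

def overlap_match(pred, gold):
    return pred[0] < gold[1] and gold[0] < pred[1]

def head_match(pred, gold, head_of):
    return head_of(pred) == head_of(gold)

def precision_recall(predicted, annotated, match):
    # precision: fraction of extracted sources matching some annotation
    # recall: fraction of annotated sources matched by some extraction
    prec_hits = sum(any(match(p, g) for g in annotated) for p in predicted)
    rec_hits = sum(any(match(p, g) for p in predicted) for g in annotated)
    precision = prec_hits / len(predicted) if predicted else 0.0
    recall = rec_hits / len(annotated) if annotated else 0.0
    return precision, recall
```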
domainspecific sense distributions and predominant sense acquisition distributions of the senses of words are often highly skewed this fact is exploitedby word sense disambiguation sys tems which back off to the predominant sense of a word when contextual clues arenot strong enough the domain of a doc ument has a strong influence on the sensedistribution of words but it is not feasi ble to produce large manually annotated corpora for every domain of interest in this paper we describe the construction of three sense annotated corpora in different domains for a sample of english wordswe apply an existing method for acquiring predominant sense information automatically from raw text and for our sam ple demonstrate that acquiring suchinformation automatically from a mixeddomain corpus is more accurate than de riving it from semcor and acquiringit automatically from text in the same do main as the target domain performs best by a large margin we also show that for an all words wsd task this automatic method is best focussed on words that are salient to the domain and on words with a different acquired predominant sense in that domain compared to that acquired from a balanced corpus from analysis of manually sense tagged corpora kilgarriff has demonstrated that distributions of the senses of words are often highly skewedmost researchers working on word sense disambiguation use manually sense tagged data such as semcor to train statistical classi fiers but also use the information in semcor on theoverall sense distribution for each word as a back off modelin wsd the heuristic of just choosing themost frequent sense of a word is very powerful especially for words with highly skewed sense distri butions indeed only 5 out of the 26 systems in the recent senseval3 english all words task outperformed the heuristic of choosing the most fre quent sense as derived from semcor furthermore sys tems that did outperform the first sense heuristic did so only by a small margin over a decade ago gale et al observed the tendency for one sense of a word to prevail in a given discourseto take advantage of this a method for automatically determining the one sensegiven a discourse or document is requiredmagnini et al have shown that information about the do main of a document is very useful for wsdthis isbecause many concepts are specific to particular domains and for many words their most likely mean ing in context is strongly correlated to the domain of the document they appear inthus since word sense distributions are skewed and depend on the domain at hand we would like to know for each domain of application the most likely sense of a wordhowever there are no extant domainspecificsense tagged corpora to derive such sense distribution information fromproducing them would be ex tremely costly since a substantial corpus would have to be annotated by hand for every domain of interestin response to this problem mccarthy et al proposed a method for automatically inducing the1this figure is the mean of two different estimates the difference being due to multiword handling419 predominant sense of a word from raw textthey carried out a limited test of their method on text in two domains using subject field codes to assess whether the acquired pre dominant sense information was broadly consistent with the domain of the text it was acquired frombut they did not evaluate their method on hand tagged domainspecific corpora since there was no such data publicly availablein this paper we evaluate the method on domainspecific 
text by creating a senseannotated gold standard2 for a sample of wordswe used a lexical sam ple because the cost of hand tagging several corpora for an allwords task would be prohibitivewe show that the sense distributions of words in this lexical sample differ depending on domainwe also showthat sense distributions are more skewed in domain specific textusing mccarthy et als method weautomatically acquire predominant sense informa tion for the lexical sample from the corpora and evaluate the accuracy of this and predominant sense information derived from semcorwe show that in our domains and for these words first sense information automatically acquired from a general corpus is more accurate than first senses derived from semcorwe also show that deriving first senseinformation from text in the same domain as the tar get data performs best particularly when focusing on words which are salient to that domainthe paper is structured as followsin section 2 we summarise mccarthy et als predominant sense methodwe then describe the new gold standard corpora and evaluate predominant sense accuracy we discuss the results with a proposal for applying the method to an allwords task and an analysis of our results in terms of this proposal before concluding with future directionswe use the method described in mccarthy et al for finding predominant senses from raw textthe method uses a thesaurus obtained from the text by parsing extracting grammatical relations and then listing each word with its top nearest neighbours where is a constantlike mccarthy 2this resource will be made publicly available for research purposes in the near futureet al we use and obtain our thesaurus using the distributional similarity metric described by lin we use wordnet as our sense inventorythe senses of a word are each assigned a ranking score which sums over the distributional similarity scores of the neighbours and weights eachneighbours score by a wn similarity score between the sense of and the sense of the neighbour that maximises the wn similarity scorethis weight is normalised by the sum of such wn similarity scores between all senses of and and the senses of the neighbour that maximises this scorewe use the wn similarity jcnscore since this gave rea sonable results for mccarthy et al and it is efficientat run time given precompilation of frequency informationthe jcn measure needs word frequency information which we obtained from the british national corpus the distributional thesaurus was constructed using subject direct object adjective modifier and noun modifier re lationsin our experiments we compare for a sampleof nouns the sense rankings created from a bal anced corpus with rankings created from domainspecific corpora extracted from the reuters corpus in more detail the three corpora are bnc the writtendocuments amounting to 3209 documents and covering a wide range of topic domainsfinance 117734 finance documents topic codes ecat and mcat sports 35317 sports documents topic code gspo we computed thesauruses for each of these corpora using the procedure outlined in section 231 word selectionin our experiments we used finance and sports domainsto ensure that a significant number of the chosen words are relevant for these domains we did not choose the words for our experiments completely randomlythe first selection criterionwe applied used the subject field code re 420 source which assignsdomain labels to synsets in wn version 16we se lected all the polysemous nouns in wn 16 that have at least one synset labelled 
sport and one synset labelled finance this reduced the set of words to 38 however some of these words were fairly obscure did not occur frequently enough in one of the domain corpora or were simply too polysemous we narrowed down the set of words using the criteria frequency in the bnc 1000 at most 12 senses and at least 75 examples in each corpus finally a couple of words were removed because the domainspecific sense was particularly obscure (footnote 3 for example the finance sense of eagle is very unlikely to be found) the resulting set consists of 17 words (footnote 4 one more word pitch was in the original selection however we did not obtain enough usable annotated sentences for this particular word and therefore it was discarded) club manager record right bill check competition conversion crew delivery division fishing reserve return score receiver running we refer to this set of words as fs cds the first four words occur in the bnc with high frequency the last two with low frequency and the rest are midfrequency

three further sets of words were selected on the basis of domain salience we chose eight words that are particularly salient in the sport corpus eight in the finance corpus and seven that had equal salience in both we computed salience as a ratio of normalised document frequencies using the formula salience(n, d) = (N_{n,d} / N_d) / (N_n / N) where N_{n,d} is the number of documents in domain d containing the noun n, N_d is the number of documents in domain d, N_n is the total number of documents containing the noun n and N is the total number of documents to obtain the sets s sal f sal and eq sal we generated the 50 most salient words for both domains and 50 words that were equally salient for both domains these lists of 50 words were subjected to the same constraints as set fs cds that is occurring in the bnc 1000 having at most 12 senses and having at least 75 examples in each corpus from the remaining words we randomly sampled 8 words from the sport salience list and finance list and 7 from the salience list for words with equal salience in both domains the resulting sets of words are

s sal fan star transfer striker goal title tie coach
f sal package chip bond market strike bank share target
eq sal will phase half top performance level country

the average degree of polysemy for this set of 40 nouns in wn is 6.6

3.2 the annotation task

for the annotation task we recruited linguistics students from two universities all ten annotators are native speakers of english we set up annotation as an open mind word expert task (footnote 5) open mind is a web based system for annotating sentences the user can choose a word from a pull down menu when a word is selected the user is presented with a list of sense definitions the sense definitions were taken from wn 171 and presented in random order below the sense definitions sentences with the target word are given left of the sentence on the screen there are as many tickboxes as there are senses for the word plus boxes for unclear and unlisted sense the annotator is expected to first read the sense definitions carefully and then after reading the sentence decide which sense is best for the instance of the word in a particular sentence only the sentence in which the word appears is presented in case the sentence does not give enough evidence to decide the annotator is expected to check the unclear box when the correct sense is not listed the annotator should check the unlisted sense box the sentences to be annotated were randomly sampled from the corpora the corpora were first part of speech tagged and lemmatised using rasp up to 125 sentences
were randomly selected for each word from each corpus sentences with clear problems were removed the first 100 remaining sentences were selected for the task for a few words there were not exactly 100 sentences per corpus available (footnote 5 httpwwwteachcomputersorgwordexpertenglish) the reuters corpus contains quite a few duplicate documents no attempts were made to remove duplicates

3.3 characterisation of the annotated data

most of the sentences were annotated by at least three people some sentences were only done by two annotators the complete set of data comprises 33225 tagging acts the interannotator agreement on the complete set of data was 65.6 for the bnc data it was 60 for the sports data 65 and for the finance data 69 this is lower than reported for other sets of annotated data but quite close to the reported 62.8 agreement between the first two taggings for single noun tagging for the senseval3 english lexical sample task the fairest comparison is probably between the latter and the interannotator agreement for the bnc data reasons why our agreement is relatively low include the fact that almost all of the sentences are annotated by three people and also the high degree of polysemy of this set of words

problematic cases the unlisted category was used as a miscellaneous category in some cases a sense was truly missing from the inventory in other cases we had not recognised that the word was really part of a multiword finally there were a number of cases where the word had been assigned the wrong part of speech tag we identified and removed all these systematic problem cases from the unlisted senses after removing the problematic unlisted cases we had between 0.9 and 4.5 unlisted instances left we also had between 1.8 and 4.8 unclear instances the percentage of unlisted instances reflects the fit of wn to the data whilst that of unclear cases reflects the generality of the corpus (footnote 6 to compute interannotator agreement we used amruta purandare and ted pedersens omtosval2 package version 001)

the sense distributions

wsd accuracy is strongly related to the entropy of the sense distribution of the target word the more skewed the sense distribution is towards a small percentage of the senses the lower the entropy accuracy is related to this because there is more data shared between fewer of the senses when the first sense is very predominant it is hard for any wsd system to beat the heuristic of always selecting that sense the sense distribution for a given word may vary depending on the domain of the text being processed in some cases this may result in a different predominant sense other characteristics of the sense distribution may also differ such as entropy of the sense distribution and the dominance of the predominant sense in table 1 we show the entropy per word in our sample and relative frequency of its first sense for each of our three gold standard annotated corpora we compute the entropy of a words sense distribution as a fraction of the possible entropy H(w) / log |senses(w)| where H(w) = - sum over senses s of w of p(s) log p(s) this measure reduces the impact of the number of senses of a word and focuses on the uncertainty within the distribution for each corpus we also show the average entropy and average relative frequency of the first sense over all words from table 1 we can see that for the vast majority of words the entropy is highest in the bnc however there are exceptions return fan and title for finance and return half level running strike and share for sports surprisingly eq sal words which are not particularly salient in
eitherdomain also typically have lower entropy in the domain specific corpora compared to the bncpre sumably this is simply because of this small set ofwords which seem particularly skewed to the fi nancial domainnote that whilst the distributionsin the domainspecific corpora are more skewed to wards a predominant sense only 7 of the 40 words in the finance corpus and 5 of the 40 words in the sports corpus have only one sense attestedthus even in domainspecific corpora ambiguity is 422 training testing bnc finance sports bnc 407 433 332 finance 391 499 240 sports 257 197 437 random bl 198 196 194 semcor fs 320 339 163 table 2 wsd using predominant senses training and testing on all domain combinationsstill present even though it is less than for general textwe show the sense number of the first sense alongside the relative frequency of that sensewe use uclfor unclear and unlfor unlisted senses where these are predominant in our annotated dataalthough the predominant sense of a word is not al ways the domainspecific sense in a domainspecific corpus the domainspecific senses typically occurmore than they do in nonrelevant corporafor ex ample sense 11 of return was notthe first sense in sports however it did have a rel ative frequency of 19 in that corpus and was absent from bnc and financewe have run the predominant sense finding algo rithm on the raw text of each of the three corporain turn we evaluate the accuracy of performingwsd purely with the predominant sense heuristic us ing all 9 combinations of training and test corporathe results are presented in table 2the random baseline is adcon mp 826 q 4769826986m a we also give theaccuracy using a first sense heuristic from semcor the precision is given alongside inbrackets because a predominant sense is not sup plied by semcor for every word7 the automatic method proposes a predominant sense in every casethe best results are obtained when training on a domain relevant corpusin all cases when training on appropriate training data the automatic methodfor finding predominant senses beats both the ran dom baseline and the baseline provided by semcortable 3 compares wsd accuracy using the auto matically acquired first sense on the 4 categories of 7there is one such word in our sample strikertest train fs cds f sal s sal eq sal bncappr 333 515 397 480 bncsc 283 440 246 362 financeappr 370 702 385 701 financesc 303 511 229 335 sportsappr 426 181 657 469 sportssc 94 381 132 122table 3 wsd using predominant senses with train ing data from the same domain or from semcorwords fs cds f sal s sal and eq sal separatelyresults using the training data from the appropriate domain are indicated with apprand contrasted with the results using semcor data indicated with sc8we see that for words which are pertinent to the do main of the test text it pays to use domain specific training datain some other cases eg f sal tested on sports it is better to use semcor datafor the eq sal words accuracy is highest when financedata is used for training reflecting their bias to fi nancial senses as noted in section 33we are not aware of any other domainspecific man ually sense tagged corporawe have created sensetagged corpora from two specific domains for a sam ple of words and a similar resource from a balanced corpus which covers a wide range of domainswehave used these resources to do a quantitative evaluation which demonstrates that automatic acquisi tion of predominant senses outperforms the semcor baseline for this sample of wordsthe domainspecific manually sense 
tagged resource is an interesting source of information in it selfit shows for example that the predominant sense is much more dominant in a specific domain than it is in the general case even for words which are notparticularly salient in that domainsimilar obser vations can be made about the average number ofencountered senses and the skew of the sense distributionsit also shows that although the predom inant sense is more dominant and domainspecific 8for semcor precision figures for the s sal words are up to 4 higher than the accuracy figures given however they are still lower than accuracy using the domain specific corpora we leave them out due to lack of space423 senses are used more within a specific domainthere is still a need for taking local context into account when disambiguating wordsthe predomi nant sense heuristic is hard to beat for some wordswithin a domain but others remain highly ambiguous even within a specific domainthe return ex ample in section 33 illustrates thisour results are for a lexical sample because we did not have the resources to produce manually tagged domainspecific corpora for an all words taskalthough sense distribution data derived fromsemcor can be more accurate than such informa tion derived automatically in a given domain there will be words for whichthe semcor frequency distributions are inappropriate or unavailablethe work presented here demonstrates that the automatic method for finding pre dominant senses outperforms semcor on a sampleof words particularly on ones that are salient to a do mainas well as domainsalient words there will be words which are not particularly salient but still havedifferent distributions than in semcorwe therefore propose that automatic methods for determin ing the first sense should be used when either there is no manually tagged data or the manually taggeddata seems to be inappropriate for the word and do main under considerationwhile it is trivial to findthe words which are absent or infrequent in train ing data such as semcor it is less obvious how to find words where the training data is not appropriateone way of finding these words would be to look for differences in the automatic sense rankings of words in domain specific corpora compared to those of the same words in balanced corpora such as the bncwe assume that the sense rankings from a balancedtext will more or less correlate with a balanced resource such as semcorof course there will be dif ferences in the corpus data but these will be less radical than those between semcor and a domain specific corpusthen the automatic ranking methodshould be applied in cases where there is a clear deviation in the ranking induced from the domain specific corpus compared to that from the balanced cor pusotherwise semcor is probably more reliable if data for the given word is availablethere are several possibilities for the definition ofclear deviationaboveone could look at differences in the ranking over all words using a mea training testing finance sportsfinance 355 sports 409 semcor 142 100 table 4 wsd accuracy for words with a different first sense to the bncsure such as pairwise agreement of rankings or a ranking correlation coefficient such as spearmansone could also use the rankings to estimate prob ability distributions and compare the distributions with measures such as alphaskew divergence a simple definition would be where the rankings assign different predominant senses to a wordtaking this simple definition of deviation we demonstrate how this might be done for 
our corporawe compared the automatic rankings from the bnc with those from each domain specific corpus for all polysemous nouns in semcoralthough the majority are assigned thesame first sense in the bnc as in the domain spe cific corpora a significant proportion are notfor all words wsd in either of these domains it would be these wordsfor which automatic ranking should be usedta ble 4 shows the wsd accuracy using this approach for the words in our lexical sample with a differentautomatically computed first sense in the bnc com pared to the target domain we trained on the appropriate domain for each test corpus and compared this with using semcor first sense datathe results show clearly that using this approach to decide whether to use automatic sense rankings performs much better than always using semcor rankingsthe method for automatically finding the predominant sense beat semcor consistently in our experimentsso for some words it pays to obtain auto matic information on frequency distributions from appropriate corporaour sense annotated corpora exhibit higher entropy for word sense distributions for domainspecific text even for words which are not specific to that domainthey also show that different senses predominate in different domains 424 and that dominance of the first sense varies to a great extent depending on the wordprevious workin all words wsd has indicated that techniques us ing handtagged resources outperform unsupervisedmethodshowever we demonstrate that it is possi ble to apply a fully automatic method to a subset ofpertinent words to improve wsd accuracythe au tomatic method seems to lead to better performance for words that are salient to a domainthere are alsoother words which though not particularly domainsalient have a different sense distribution to that an ticipated for a balanced corpuswe propose that inorder to tackle an all words task automatic methods should be applied to words which have a sub stantial difference in sense ranking compared to that obtained from a balanced corpuswe demonstrate that for a set of words which meet this conditionthe performance of the automatic method is far bet ter than when using data from semcorwe will dofurther work to ascertain the best method for quanti fying substantial changewe also intend to exploit the automatic rankingto obtain information on sense frequency distribu tions given the genre as well as the domain of the textwe plan to combine this with local context using collocates of neighbours in the thesaurus for contextual wsdacknowledgements we would like to thank siddharth patwardhan and ted pedersen for making the wn similarity package available rada mihalceaand tim chklovski for making the open mind software avail able to us and julie weeds for the thesaurus softwarethe work was funded by eu200134460 project meaning uk epsrc project ranking word sense for word sense disambiguationand the uk royal society
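to make the predominant-sense ranking summarised in section 2 and the simple "clear deviation" rule proposed above concrete, here is a rough python sketch; the similarity functions, sense inventories and ranking tables are assumed inputs for illustration rather than the exact implementation used in the paper

```python
# rough sketch of the predominant-sense ranking and of the simple
# "clear deviation" fallback rule; dss, wn_sim, senses and the ranking
# dictionaries are assumed to be supplied by the caller
def rank_senses(word, senses, neighbours, dss, wn_sim):
    # senses: dict word -> list of candidate senses
    # neighbours: the word's top-k distributionally similar words
    # dss(w, n): distributional similarity; wn_sim(s1, s2): e.g. a jcn score
    def wnss(sense, neighbour):
        return max(wn_sim(sense, ns) for ns in senses[neighbour])

    scores = {}
    for s in senses[word]:
        total = 0.0
        for n in neighbours:
            norm = sum(wnss(s2, n) for s2 in senses[word])
            if norm > 0:
                total += dss(word, n) * wnss(s, n) / norm
        scores[s] = total
    return sorted(scores, key=scores.get, reverse=True)  # predominant sense first

def choose_first_sense(word, domain_ranking, bnc_ranking, semcor_first):
    # use the automatically acquired domain ranking only when it disagrees
    # with the balanced-corpus (bnc) ranking about the predominant sense,
    # otherwise fall back on semcor when it covers the word
    if domain_ranking[word][0] != bnc_ranking[word][0]:
        return domain_ranking[word][0]
    return semcor_first.get(word, bnc_ranking[word][0])
```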
H05-1053
domainspecific sense distributions and predominant sense acquisition distributions of the senses of words are often highly skewed this fact is exploited by word sense disambiguation systems which back off to the predominant sense of a word when contextual clues are not strong enough the domain of a document has a strong influence on the sense distribution of words but it is not feasible to produce large manually annotated corpora for every domain of interest in this paper we describe the construction of three sense annotated corpora in different domains for a sample of english words we apply an existing method for acquiring predominant sense information automatically from raw text and for our sample demonstrate that acquiring such information automatically from a mixeddomain corpus is more accurate than deriving it from semcor and acquiring it automatically from text in the same domain as the target domain performs best by a large margin we also show that for an all words wsd task this automatic method is best focussed on words that are salient to the domain and on words with a different acquired predominant sense in that domain compared to that acquired from a balanced corpus our dataset is made up of 3 collections of documents a domainneutral corpus and two domainspecific corpora
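the two quantities used above to characterise the annotated corpora, entropy of a sense distribution as a fraction of the possible entropy and domain salience as a ratio of normalised document frequencies, reduce to a few lines; this is a sketch of those formulas as reconstructed above, not the authors' code

```python
import math

# sketch of relative entropy of a word's sense distribution and of the
# salience ratio; raw counts are assumed inputs
def relative_entropy(sense_counts):
    total = sum(sense_counts)
    probs = [c / total for c in sense_counts if c > 0]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(len(sense_counts)) if len(sense_counts) > 1 else 0.0

def salience(n_docs_noun_in_domain, n_docs_in_domain, n_docs_noun, n_docs):
    # (N_{n,d} / N_d) / (N_n / N)
    return (n_docs_noun_in_domain / n_docs_in_domain) / (n_docs_noun / n_docs)
```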
bidirectional inference with the easiestfirst strategy for tagging sequence data this paper presents a bidirectional inference algorithm for sequence labeling problems such as partofspeech tag ging named entity recognition and text chunking the algorithm can enumerate all possible decomposition structures andfind the highest probability sequence together with the corresponding decomposi tion structure in polynomial time we also present an efficient decoding algorithm based on the easiestfirst strategy which gives comparably good performance tofull bidirectional inference with significantly lower computational cost exper imental results of partofspeech tagging and text chunking show that the proposedbidirectional inference methods consis tently outperform unidirectional inference methods and bidirectional memms give comparable performance to that achievedby stateoftheart learning algorithms in cluding kernel support vector machines the task of labeling sequence data such as partof speech tagging chunking and named entity recognition is one of the most i am portant tasks in natural language processingconditional random fields have recently attracted much attention because they are free from socalled label bias prob lems which reportedly degrade the performance of sequential classification approaches like maximum entropy markov models although sequential classification approachescould suffer from label bias problems they have sev eral advantages over crfsone is the efficiencyof trainingcrfs need to perform dynamic programming over the whole sentence in order to compute feature expectations in each iteration of numerical optimizationtraining for instance second order crfs using a rich set of features can require prohibitive computational resourcesmaxmarginmethods for structured data share problems of com putational cost another advantage is that one can employ a variety of machine learning algorithms as the local classifierthere is huge amount of work about developing classification algorithms that have high generalization performance in the machine learning communitybeing able to incorporate such stateoftheart machine learning algorithms is importantindeed sequential classification approaches with kernel support vector machines offer competitive per formance in pos tagging and chunking one obvious way to improve the performance of sequential classification approaches is to enrich theinformation that the local classifiers can usein stan dard decomposition techniques the local classifiers cannot use the information about future tags which would be helpful in predicting the tag of the targetwordto make use of the information about future tags toutanova et al proposed a tagging algo rithm based on bidirectional dependency networks 467 and achieved the best ac curacy on pos tagging on the wall street journal corpusas they pointed out in their paper howevertheir method potentially suffers from collusionef fects which make the model lock onto conditionally consistent but jointly unlikely sequencesin theirmodeling the local classifiers can always use the in formation about future tags but that could cause a doublecounting effect of tag informationin this paper we propose an alternative way of making use of future tagsour inference method considers all possible ways of decomposition andchooses the bestdecomposition so the informa tion about future tags is used only in appropriate situationswe also present a deterministic versionof the inference method and show their effective ness with experiments of 
english pos tagging and chunking using standard evaluation setsthe task of labeling sequence data is to find the se quence of tags t1tn that maximizes the following probability given the observation o o1on p observations are typically words and their lexicalfeatures in the task of pos taggingsequential clas sification approaches decompose the probability as follows p ni1 p this is the lefttoright decompositionif we make a firstorder markov assumption the equation becomes p ni1 p then we can employ a probabilistic classifier trained with the preceding tag and observations in order to obtain p for local classificationa common choice for the local probabilistic classifier is maximum entropy classifiers the best tag sequence can be efficiently computed by using a viterbi decoding algorithm in polynomial timet1 t2 t3 o t1 t2 t3 t1 t2 t3 t1 t2 t3 o o o figure 1 different structures for decompositionthe righttoleft decomposition is p ni1 p these two ways of decomposition are widely usedin various tagging problems in natural language pro cessingthe issue with such decompositions is that you have only the information about the preceding tags when performing local classifi cationfrom the viewpoint of local classification we want to give the classifier as much information as possible because the information about neighboring tags is useful in generalas an example consider the situation where we are going to annotate a threeword sentence withpartofspeech tagsfigure 1 shows the four possi ble ways of decompositionthey correspond to the following equations p p p p p p p p p p p p p p p p and are the standard lefttoright and righttoleft decompositionsnotice that in decomposi tion the local classifier can use the information about the tags on both sides when deciding t2if for example the second word is difficult to tag we might as well take the de composition structure because the local classifier 468 can use rich information when deciding the tag of the most difficult wordin general if we have annword sentence and adopt a firstorder markov assumption we have 2n1 possible ways of decomposition because each of the n 1 edges in the cor responding graph has two directions our bidirectional inference method is to consider all possible decomposition structures and choose the beststructure and tag sequencewe will show inthe next section that this is actually possible in poly nomial time by dynamic programmingas for the training let us look at the equa tions of four different decompositions aboveyou can notice that there are only four types of local conditional probabilities p p p and p this means that if we have these four types of lo cal classifiers we can consider any decompositionstructures in the decoding stagethese local classi fiers can be obtained by training with corresponding neighboring tag informationtraining the first twotypes of classifiers is exactly the same as the training of popular lefttoright and righttoleft sequen tial classification models respectivelyif we take a secondorder markov assumption we need to train 16 types of local classifiers because each of the four neighboring tags of a classificationtarget has two possibilities of availabilityin gen eral if we take a kth order markov assumption we need to train 22k types of local classifies21 polynomial time inferencethis section describes an algorithm to find the de composition structure and tag sequence that give the highest probabilitythe algorithm for the firstorder case is an adaptation of the algorithm for decodingthe best sequence on 
a bidirectional dependency network introduced by which originates from the viterbi decoding algorithm for secondorder markov models figure 2 shows a polynomial time decoding algorithm for our bidirectional inference it enumerates all possible decomposition structures and tag sequences by recursive function calls and finds the highest probability sequence polynomial time is achieved by caching note that for each local classification the function chooses the appropriate local classifier by taking into account the directions of the adjacent edges of the classification target
[figure 2 pseudocode for bidirectional inference for the firstorder conditional markov models di is the direction of the edge between ti and ti+1 a bestscore function calls a memoized bestscoresub which handles a left boundary case and a recursive case via a localclassification helper the pseudocode itself is not recoverable from the extracted text]
the secondorder case is similar but slightly more complex figure 3 shows the algorithm the recursive function needs to consider the directions of the four adjacent edges of the classification target and maintain the directions of the two neighboring edges to enumerate all possible edge directions in addition the algorithm rules out cycles in the structure
2.2 decoding with the easiestfirst strategy
we presented a polynomial time decoding algorithm in the previous section however polynomial time is not low enough in practice indeed even the viterbi decoding of secondorder markov models for pos tagging is not practical unless some pruning method is involved the computational cost of the bidirectional decoding algorithm presented in the previous section is of course larger than that because it enumerates all possible directions of the edges on top of the enumeration of possible tag sequences in this section we present a greedy version of the decoding method for bidirectional inference which is extremely simple and significantly more efficient than full bidirectional decoding
[figure 3 pseudocode for bidirectional inference for the secondorder conditional markov models di is the direction of the edge between ti and ti+1 and a second direction variable gives the direction of the edge between ti-1 and ti+1 the localclassification function is omitted because it is the obvious extension of that for the firstorder case the pseudocode adds a cycle avoidance check on top of the firstorder version and is not recoverable from the extracted text]
instead of enumerating all possible decomposition structures the algorithm determines the structure by adopting the easiestfirst strategy the whole decoding algorithm is given below
1 find the easiest word to tag
2 tag the word
we assume in this paper that the easiest word to tag is the word for which the classifier outputs the highest probability in finding the easiest word we use the appropriate local classifier according to the availability of the neighboring tags therefore in the first iteration we always use the local classifiers trained with no contextual tag information then for example if t3 has been tagged in the first iteration in a threeword sentence we use p to compute the probability for tagging t2 in the second iteration a naive implementation of this algorithm requires o invocations of local classifiers where n is the number of the words in the sentence because we need to update the probabilities over
the words at each iteration however a kth order markov assumption obviously allows us to skip most of the probability updates resulting in o invocations of local classifiers this enables us to build a very efficient tagger for local classifiers we used a maximum entropy model which is a common choice for incorporating various types of features for classification problems in natural language processing regularization is important in maximum entropy modeling to avoid overfitting to the training data for this purpose we use the maximum entropy modeling with inequality constraints the model gives equally good performance as the maximum entropy modeling with gaussian priors and the size of the resulting model is much smaller than that of gaussian priors because most of the parameters become zero this characteristic enables us to easily handle the model data and carry out quick decoding which is convenient when we repetitively perform experiments this modeling has one parameter to tune which is called the width factor we tuned this parameter using the development data in each type of experiments
table 1 feature templates used in pos tagging experiments tags are partsofspeech tag features are not necessarily used in all the models for example next tag features cannot be used in lefttoright models
  current word          wi, ti
  previous word         wi-1, ti
  next word             wi+1, ti
  bigram features       wi-1 wi, ti / wi wi+1, ti
  previous tag          ti-1, ti
  tag two back          ti-2, ti
  next tag              ti+1, ti
  tag two ahead         ti+2, ti
  tag bigrams           ti-2 ti-1, ti / ti-1 ti+1, ti / ti+1 ti+2, ti
  tag trigrams          ti-2 ti-1 ti+1, ti / ti-1 ti+1 ti+2, ti
  tag 4grams            ti-2 ti-1 ti+1 ti+2, ti
  tagword combination   ti-1 wi, ti / ti+1 wi, ti / ti-1 ti+1 wi, ti
  prefix features       prefixes of wi, ti
  suffix features       suffixes of wi, ti
  lexical features      whether wi has a hyphen, ti / whether wi has a number, ti / whether wi has a capital letter, ti / whether wi is all capital, ti
to evaluate the bidirectional inference methods presented in the previous sections we ran experiments on pos tagging and text chunking with standard english data sets although achieving the best accuracy is not the primary purpose of this paper we explored useful feature sets and parameter setting by using development data in order to make the experiments realistic
4.1 partofspeech tagging experiments
we split the penn treebank corpus into training development and test sets as in sections 0-18 are used as the training set sections 19-21 are the development set and sections 22-24 are used as the test set all the experiments were carried out on the development set except for the final accuracy report using the best setting for features we basically adopted the feature set provided by except for complex features such as crude companyname detection features because they are specific to the penn treebank and we could not find the exact implementation details table 1 lists the feature templates used in our experiments we tested the proposed bidirectional methods conventional unidirectional methods and the bidirectional dependency network proposed by toutanova for comparison all the models are secondorder table 2 shows the accuracy and tagging speed on the development data
table 2 pos tagging accuracy and speed on the development set
  method               accuracy  speed
  lefttoright          96.92     844
  righttoleft          96.89     902
  dependency networks  97.06     1446
  easiestlast          96.58     2360
  easiestfirst         97.13     2461
  full bidirectional   97.12     34
table 3 pos tagging accuracy on the test set
  method               accuracy
  dependency networks  97.24
  perceptron           97.11
  svm                  97.05
  hmm                  96.48
  easiestfirst         97.10
  full bidirectional   97.15
bidirectional inference methods clearly out
performed unidirectional methodsnote that the easiestfirst decoding method achieves equally good performance as full bidirectional inferencetable 2 also shows that the easiestlast strategy where weselect and tag the most difficult word at each itera tion is clearly a bad strategyan example of easiestfirst decoding is given be low 1for dependency network and full bidirectional decoding we conducted pruning because the computational cost was too large to perform exhaustive searchwe pruned a tag candidate if the zeroth order probability of the candidate p was lower than one hundredth of the zeroth order probability of the most likely tag at the token2tagging speed was measured on a server with an amd opteron 24ghz cpu471 thedt4 companynn7 hadvbd11soughtvbn14 increasesnns13 total ingvbg12 2 803cd5 millioncd8 1 orcc6 22cd9 nn10 3 each token represents wordposdecodingordertypically punctuations and articles are tagged firstverbs are usually tagged in later stages because their tags are likely to be ambiguouswe applied our bidirectional inference methods to the test datathe results are shown in table 3the table also summarizes the accuracies achieved by several other research effortsthe best accuracyis 9724 achieved by bidirectional dependency net works with a richer set of features that are carefully designed for the corpusa perceptron algorithm gives 9711 gimenez and marquez achieve 9705 with support vector machines this result indicates thatbidirectional inference with maximum entropy mod eling can achieve comparable performance to other stateoftheart pos tagging methods42 chunking experimentsthe task of chunking is to find nonrecursive phrases in a sentencefor example a chunker segments the sentence he reckons the current account deficit willnarrow to only 18 billion in septemberinto the fol lowing np he vp reckons np the current accountdeficit vp will narrow pp to np only 18 bil lion pp in np september we can regard chunking as a tagging task by con verting chunks into tags on tokensthere are severalways of representing text chunks we tested the startend representation in addition to the popular iob2 representation since local classifiers can have finegrained informationon the neighboring tags in the startend represen tationfor training and testing we used the data set pro vided for the conll2000 shared taskthe training set consists of section 1518 of the wsj corpus and the test set is section 20in addition we made the development set from section 21 3we basically adopted the feature set provided in 3we used the perl script provided on httpilkkubnlsabinechunklink current word wi ti previous word wi1 ti word two back wi2 ti next word wi1 ti word two ahead wi2 ti bigram features wi2 wi1 ti wi1 wi ti wi wi1 ti wi1 wi2 ti current pos pi ti previous pos pi1 ti pos two back pi2 ti next pos pi1 ti pos two ahead pi2 ti bigram pos features pi2 pi1 ti pi1 pi ti pi pi1 ti pi1 pi2 ti trigram pos features pi2 pi1 pi ti pi1 pi pi1 ti pi pi1 pi2 ti previous tag ti1 ti tag two back ti2 ti next tag ti1 ti tag two ahead ti2 ti bigram tag features ti2 ti1 ti ti1 ti1 ti ti1 ti2 titable 4 feature templates used in chunking experi ments and used postrigrams as wellta ble 4 lists the features used in chunking experimentstable 5 shows the results on the development setagain bidirectional methods exhibit better perfor mance than unidirectional methodsthe differenceis bigger with the startend representationdepen dency networks did not work well for this chunking task especially with the startend representationwe 
applied the best model on the development set in each chunk representation type to the test datatable 6 summarizes the performance on thetest setour bidirectional methods achieved f scores of 9363 and 9370 which are better than the best fscore of the conll2000 shared task and comparable to those achieved by other stateoftheart methodsthere are some reports that one can improve the performance of unidirectional models by combiningoutputs of multiple taggersshen et al re ported a 49 error reduction of supertagging by 472 representation method order recall precision fscore speed iob2 lefttoright 1 9317 9305 9311 1775 2 9313 9290 9301 989 righttoleft 1 9292 9282 9287 1635 2 9292 9274 9287 927 dependency networks 1 9271 9291 9281 2534 2 9261 9295 9278 1893 easiestfirst 1 9317 9304 9311 2441 2 9335 9332 9333 1248 full bidirectional 1 9329 9314 9321 712 2 9326 9312 9319 48 startend lefttoright 1 9298 9269 9283 861 2 9296 9267 9281 439 righttoleft 1 9292 9283 9287 887 2 9289 9274 9282 451 dependency networks 1 8710 8956 8832 1894 2 8716 8944 8828 331 easiestfirst 1 9333 9295 9314 1950 2 9331 9295 9313 1016 full bidirectional 1 9352 9326 9339 392 2 9344 9320 9332 4 table 5 chunking fscores on the development setmethod recall precision fscore svm 9351 9345 9348 svm voting 9392 9389 9391 regularized winnow 9360 9354 9357 perceptron 9329 9419 9374 easiestfirst 9359 9368 9363 full bidirectional 9370 9365 9370 table 6 chunking fscores on the test set pairwise voting between lefttoright and rightto left taggerskudo et al attained performance improvement in chunking by conducting weighted voting of multiple svms trained with distinct chunk representationsthe biggest difference between ourapproach and such voting methods is that the lo cal classifier in our bidirectional inference methodscan have rich information for decisionalso vot ing methods generally need many tagging processes to be run on a sentence which makes it difficult to build a fast taggerour algorithm can be seen as an ensemble classi fier by which we choose the highest probability oneamong the different taggers with all possible decom position structuresalthough choosing the highest probability one is seemingly natural and one of the simplest ways for combining the outputs of differenttaggers one could use a different method investigating the methods for combination should be an interesting direction of future workas for the computational cost for training our methods require us to train 22n types of classifiers when we adopt an nth order markov assumptioninmany cases a secondorder model is sufficient because further increase of n has little impact on per formancethus the training typically takes four or 16 times as much time as it would take for training a single unidirectional tagger which looks somewhatexpensivehowever because each type of classi fier can be trained independently the training can be performed completely in parallel and run with the same amount of memory as that for training a single classifierthis advantage contrasts with the case for crfs which requires substantial amount ofmemory and computational cost if one tries to incor porate higherorder features about tag sequencestagging speed is another important factor inbuilding a practical tagger for largescale text min 473 ingour inference algorithm with the easiestfirst strategy needs no viterbi decoding unlike memms and crfs and makes it possible to perform very fast tagging with high precisionwe have presented a bidirectional inference algo rithm for sequence labeling 
problems such as postagging named entity recognition and text chunkingthe algorithm can enumerate all possible decomposition structures and find the highest prob ability sequence together with the corresponding decomposition structure in polynomial timewehave also presented an efficient bidirectional infer ence algorithm based on the easiestfirst strategywhich gives comparable performance to full bidirectional inference with significantly lower compu tational costexperimental results of pos tagging and textchunking show that the proposed bidirectional inference methods consistently outperform unidi rectional inference methods and our bidirectional memms give comparable performance to thatachieved by stateoftheart learning algorithms in cluding kernel support vector machinesa natural extension of this work is to replace the maximum entropy modeling which was used asthe local classifiers with other machine learning algorithmssupport vector machines with appropri ate kernels is a good candidate because they havegood generalization performance as a single classi fieralthough svms do not output probabilities theeasiestfirst method would be easily applied by considering the margins output by svms as the confi dence of local classification
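the easiestfirst strategy described above is simple enough to sketch directly. the following python fragment is only a minimal illustration under a firstorder markov assumption: easiest_first_decode, best_for and the toy classifier are hypothetical names standing in for the authors maximum entropy (or svm margin based) local classifiers, and it makes no claim to reproduce their exact caching or update scheme.

from typing import Callable, Dict, List, Optional, Tuple

# local_classifier(words, i, left_tag, right_tag) -> dict mapping tag -> probability
# left_tag / right_tag are None when that neighbour is still untagged, which
# corresponds to using the local model trained without that piece of context
def easiest_first_decode(
    words: List[str],
    local_classifier: Callable[[List[str], int, Optional[str], Optional[str]], Dict[str, float]],
) -> List[str]:
    n = len(words)
    tags: List[Optional[str]] = [None] * n

    def best_for(i: int) -> Tuple[float, str]:
        left = tags[i - 1] if i > 0 else None       # left context, if already tagged
        right = tags[i + 1] if i < n - 1 else None  # right context, if already tagged
        dist = local_classifier(words, i, left, right)
        tag, prob = max(dist.items(), key=lambda kv: kv[1])
        return prob, tag

    # cache of (confidence, best tag) for every still-untagged position
    candidates: Dict[int, Tuple[float, str]] = {i: best_for(i) for i in range(n)}

    while candidates:
        # pick the "easiest" word: the one whose classifier is most confident
        i = max(candidates, key=lambda j: candidates[j][0])
        tags[i] = candidates.pop(i)[1]
        # under a first-order assumption only the immediate neighbours gain new
        # context, so only their cached scores need to be recomputed
        for j in (i - 1, i + 1):
            if 0 <= j < n and j in candidates:
                candidates[j] = best_for(j)

    return [t for t in tags if t is not None]

# toy usage with a made-up classifier that is very sure about "the"
def toy_classifier(words, i, left, right):
    if words[i].lower() == "the":
        return {"DT": 0.99, "NN": 0.01}
    boost = 0.1 if left == "DT" else 0.0
    return {"NN": 0.6 + boost, "VB": 0.4 - boost}

print(easiest_first_decode(["the", "dog", "barks"], toy_classifier))

rescoring only the neighbours of a newly tagged word is the saving the text attributes to the markov assumption, and it is what keeps the greedy decoder fast compared to full bidirectional inference.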
H05-1059
bidirectional inference with the easiestfirst strategy for tagging sequence datathis paper presents a bidirectional inference algorithm for sequence labeling problems such as partofspeech tagging named entity recognition and text chunkingthe algorithm can enumerate all possible decomposition structures and find the highest probability sequence together with the corresponding decomposition structure in polynomial timewe also present an efficient decoding algorithm based on the easiestfirst strategy which gives comparably good performance to full bidirectional inference with significantly lower computational costexperimental results of partofspeech tagging and text chunking show that the proposed bidirectional inference methods consistently outperform unidirectional inference methods and bidirectional memms give comparable performance to that achieved by stateoftheart learning algorithms including kernel support vector machineswe propose easiestfirst deterministic decoding
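to make the phrase "enumerate all possible decomposition structures" concrete, here is a deliberately brute-force sketch: each of the n-1 edges between adjacent tokens gets a direction, and that direction pattern decides which neighbouring tags each local classifier may condition on. this is exponential and only meant as an illustration of the objective, not a replacement for the polynomial algorithm in the paper; the function name, signature and toy probability model are all assumptions of this sketch.

from itertools import product

# local_prob(words, i, tags, left_visible, right_visible) should return the
# probability of tags[i] given the observation and whichever neighbouring tags
# the decomposition structure makes visible (a hypothetical interface)
def best_structure_and_tags(words, tagset, local_prob):
    n = len(words)
    best_dirs, best_tags, best_p = None, None, -1.0
    # 1 means the edge points right (the right token may see the left tag),
    # 0 means the edge points left (the left token may see the right tag)
    for dirs in product((0, 1), repeat=max(n - 1, 0)):
        for tags in product(tagset, repeat=n):
            p = 1.0
            for i in range(n):
                left_visible = i > 0 and dirs[i - 1] == 1
                right_visible = i < n - 1 and dirs[i] == 0
                p *= local_prob(words, i, tags, left_visible, right_visible)
            if p > best_p:
                best_dirs, best_tags, best_p = dirs, tags, p
    return best_dirs, best_tags, best_p

def toy_local_prob(words, i, tags, left_visible, right_visible):
    # made-up model that mildly rewards agreeing with a visible left neighbour
    p = 0.5
    if left_visible and tags[i] == tags[i - 1]:
        p += 0.2
    return p

print(best_structure_and_tags(["a", "b", "c"], ["X", "Y"], toy_local_prob))

the paper's contribution is that this joint maximisation over direction patterns and tag sequences can be done in polynomial time by dynamic programming rather than by the nested loops above.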
nonprojective dependency parsing using spanning tree algorithms we formalize weighted dependency pars ing as searching for maximum spanning trees in directed graphs using this representation the parsing algorithmof eisner is sufficient for search ing over all projective trees in o time more surprisingly the representation isextended naturally to nonprojective pars ing using chuliuedmonds mst algorithm yielding an o parsing al gorithm we evaluate these methodson the prague dependency treebank using online largemargin learning tech niques and show that mst parsingincreases efficiency and accuracy for lan guages with nonprojective dependencies dependency parsing has seen a surge of interest lately for applications such as relation extraction machine translation synonym genera tion and lexical resource augmentation the primary reasons for using dependency structures instead of more informative lexicalized phrase structures is that they are more efficient to learn and parse whilestill encoding much of the predicateargument infor mation needed in applicationsroot john hit the ball with the bat figure 1 an example dependency treedependency representations which link words to their arguments have a long history figure 1 shows a dependency tree for the sentence john hit the ball with the batwe restrict ourselvesto dependency tree analyses in which each word de pends on exactly one parent either another word or a dummy root symbol as shown in the figurethe tree in figure 1 is projective meaning that if we put the words in their linear order preceded by the root theedges can be drawn above the words without cross ings or equivalently a word and its descendants form a contiguous substring of the sentencein english projective trees are sufficient to ana lyze most sentence typesin fact the largest sourceof english dependency trees is automatically gener ated from the penn treebank and is by convention exclusively projectivehowever there are certain examples in which a non projective tree is preferableconsider the sentencejohn saw a dog yesterday which was a yorkshire ter rierhere the relative clause which was a yorkshireterrier and the object it modifies are sep arated by an adverbthere is no way to draw the dependency tree for this sentence in the plane withno crossing edges as illustrated in figure 2in lan guages with more flexible word order than english such as german dutch and czech nonprojective dependencies are more frequentrich inflection systems reduce reliance on word order to express 523 root john saw a dog yesterday which was a yorkshire terrier root o to novevetsinou nemaani zajem a taky na to vetsinou nemapenze he is mostly not even interested in the new things and in most cases he has no money for it eitherfigure 2 nonprojective dependency trees in english and czechgrammatical relations allowing nonprojective dependencies that we need to represent and parse ef ficientlya nonprojective example from the czech prague dependency treebank is also shown in figure 2most previous dependency parsing models have focused on projective trees including the work of eisner collins et al yamada and matsumoto nivre and scholz and mcdonald et al these systems have shown that accurate projective dependency parsers can be automatically learned from parsed datahowever nonprojective analyses have recently attracted some interest not only for languages with freer word order but also for englishin particular wang and harper describe a broad coverage nonprojectiveparser for english based on a handconstructed 
constraint dependency grammar rich in lexical and syntactic informationnivre and nilsson presented a parsing model that allows for the introduc tion of nonprojective edges into dependency trees through learned edge transformations within their memorybased parserthey test this system onczech and show improved accuracy relative to a projective parserour approach differs from those ear lier efforts in searching optimally and efficiently the full space of nonprojective treesthe main idea of our method is that dependencyparsing can be formalized as the search for a maximum spanning tree in a directed graphthis formalization generalizes standard projective parsing mod els based on the eisner algorithm toyield efficient o exact parsing methods for nonprojective languages like czechusing this spanning tree representation we extend the work of mcdonald et al on online largemargin discriminative training methods to nonprojective depen denciesthe present work is related to that of hirakawa who like us reduces the problem of depen dency parsing to spanning tree searchhowever his parsing method uses a branch and bound algorithm that is exponential in the worst case even thoughit appears to perform reasonably in limited experi mentsfurthermore his work does not adequately address learning or measure parsing accuracy on heldout datasection 2 describes an edgebased factorizationof dependency trees and uses it to equate depen dency parsing to the problem of finding maximumspanning trees in directed graphssection 3 out lines the online largemargin learning framework used to train our dependency parsersfinally in section 4 we present parsing results for czechthe trees in figure 1 and figure 2 are untyped that is edges are not partitioned into types representingadditional syntactic information such as grammati cal functionwe study untyped dependency treesmainly but edge types can be added with simple ex tensions to the methods discussed here21 edge based factorizationin what follows x x1 xn represents a genericinput sentence and y represents a generic depen dency tree for sentence x seeing y as the set of tree edges we write y if there is a dependency in y from word xi to word xj in this paper we follow a common method of fac toring the score of a dependency tree as the sum of the scores of all edges in the treein particular wedefine the score of an edge to be the dot product be 524 tween a high dimensional feature representation of the edge and a weight vector s w f thus the score of a dependency tree y for sentence x is s y s y w f assuming an appropriate feature representation as well as a weight vector w dependency parsing is the task of finding the dependency tree y with highest score for a given sentence x for the rest of this section we assume that the weight vector w is known and thus we know the score s of each possible edgein section 3 we present a method for learning the weight vector22 maximum spanning treeswe represent the generic directed graph g by its vertex set v v1 vn and set e 1 n1 n of pairs of directed edges vi vj each such edge has a score ssince g is di rected s does not necessarily equal sa maximum spanning tree of g is a tree y e that maximizes the value y s such thatevery vertex in v appears in y the maximum pro jective spanning tree of g is constructed similarlyexcept that it can only contain projective edges rel ative to some total order on the vertices of g the mst problem for directed graphs is also known as the maximum arborescence problemfor each sentence x we define the 
directed graph gx given by vx x0 root x1 xn ex i 6 j 0 n 1 n that is gx is a graph with the sentence words and the dummy root symbol as vertices and a directed edge between every pair of distinct words and fromthe root symbol to every wordit is clear that dependency trees for x and spanning trees for gx co incide since both kinds of trees are required to be rooted at the dummy root and reach all the wordsin the sentencehence finding a depen dency tree with highest score is equivalent to finding a maximum spanning tree in gxchuliuedmonds graph g edge weight function s e r 1let m x v x arg maxxs2let gm 4otherwise find a cycle c in gm5let gc contract6let y chuliuedmonds7find a vertex x c s t y c 8return y c contract c s 1let gc be the subgraph of g excluding nodes in c 2add a node c to gc representing cycle c add edge to gc with s maxxc s 4for x v c xce add edge to gc with s maxxc ss x s where a is the predecessor of v in c and s pvc s v 5return gc figure 3 chuliuedmonds algorithm for finding maximum spanning trees in directed graphs221 nonprojective trees to find the highest scoring nonprojective tree we simply search the entire space of spanning trees with no restrictionswellknown algorithms exist for theless general case of finding spanning trees in undi rected graphs efficient algorithms for the directed case are less well known but they existwe will use here the chuliuedmonds algorithm sketched in figure 3 follow ing leonidas informally the algorithm has each vertex in the graph greedily select the incoming edge with highest weightif a tree results it must be the maximum spanning treeif not there must be a cyclethe procedure identifies a cycle and contracts it into a single vertex and recalculates edge weights going into and out of the cycleit can be shown that a maximum spanning tree on the contracted graph isequivalent to a maximum spanning tree in the orig inal graph hence the algorithm can recursively call itself on the new graphnaivelythis algorithm runs in o time since each recur sive call takes o to find the highest incoming edge for each word and to contract the graphthere are at most o recursive calls since we cannot contract the graph more then n timeshowever 525 tarjan gives an efficient implementation of the algorithm with o time complexity for dense graphs which is what we need hereto find the highest scoring nonprojective tree for a sentence x we simply construct the graph gx and run it through the chuliuedmonds algorithmthe resulting spanning tree is the best nonprojective dependency treewe illustrate here the application of the chuliuedmonds algorithm to dependency parsing on the simple example x john saw mary with directed graph representation gx root saw john mary 10 9 9 30 3020 3 0 11 the first step of the algorithm is to find for each word the highest scoring incoming edge root saw john mary30 3020 if the result were a tree it would have to be the maximum spanning treehowever in this case we have a cycle so we will contract it into a single node and recalculate edge weights according to figure 3root saw john mary 40 9 30 31 wjs the new vertex wjs represents the contraction of vertices john and sawthe edge from wjs to mary is 30 since that is the highest scoring edge from any vertex in wjsthe edge from root into wjs is set to40 since this represents the score of the best span ning tree originating from root and including only the vertices in wjsthe same leads to the edge from mary to wjsthe fundamental property of the chuliuedmonds algorithm is that an mst in thisgraph 
can be transformed into an mst in the orig inal graph thus we recursively call the algorithm on this graphnote that we need to keep track of the real endpoints of the edges into and out of wjs for reconstruction laterrunning the algorithm we must find the best incoming edge to all words root saw john mary 40 30 wjs this is a tree and thus the mst of this graphwe now need to go up a level and reconstruct the graphthe edge from wjs to mary originally was from the word saw so we include that edgefurthermore the edge from root to wjs represented a tree from root to saw to john so we include all those edges to get the final mst root saw john mary 10 3030 a possible concern with searching the entire spaceof spanning trees is that we have not used any syntactic constraints to guide the searchmany lan guages that allow nonprojectivity are still primarily projectiveby searching all possible nonprojective trees we run the risk of finding extremely bad treeswe address this concern in section 4222 projective treesit is well known that projective dependency pars ing using edge based factorization can be handledwith the eisner algorithm this algorithm has a runtime of o and has been employed successfully in both generative and discrimi native parsing models furthermore it is trivial to show that the eisner algorithm solves the maximum projective spanning tree problemthe eisner algorithm differs significantly from the chuliuedmonds algorithmfirst of all it is abottomup dynamic programming algorithm as opposed to a greedy recursive onea bottomup al gorithm is necessary for the projective case since it must maintain the nested structural constraint which is unnecessary for the nonprojective case23 dependency trees as msts summaryin the preceding discussion we have shown that nat ural language dependency parsing can be reduced to finding maximum spanning trees in directed graphsthis reduction results from edgebased factoriza tion and can be applied to projective languages with 526the eisner parsing algorithm and nonprojective languages with the chuliuedmonds maximum span ning tree algorithmthe only remaining problem is how to learn the weight vector w a major advantage of our approach over other dependency parsing models is its uniformity and simplicityby viewing dependency structures asspanning trees we have provided a general framework for parsing trees for both projective and non projective languagesfurthermore the resultingparsing algorithms are more efficient than lexi calized phrase structure approaches to dependencyparsing allowing us to search the entire space with out any pruningin particular the nonprojective parsing algorithm based on the chuliuedmondsmst algorithm provides true nonprojective parsingthis is in contrast to other nonprojective meth ods such as that of nivre and nilsson who implement nonprojectivity in a pseudoprojective parser with edge transformationsthis formulation also dispels the notion that nonprojective parsing isharderthan projective parsingin fact it is easier since nonprojective parsing does not need to en force the noncrossing constraint of projective treesas a result nonprojective parsing complexity is justo against the o complexity of the eisner dynamic programming algorithm which by con struction enforces the noncrossing constraintin this section we review the work of mcdonald etal for online largemargin dependency pars ingas usual for supervised learning we assume a training set t tt1 consisting of pairs of a sentence xt and its correct dependency tree ytin what 
follows dt denotes the set of possible dependency trees for sentence x the basic idea is to extend the margin infused relaxed algorithm to learning with struc tured outputs in the present case dependency treesfigure 4 gives pseudocode for the mira algorithmas presented by mcdonald et al an on line learning algorithm considers a single training instance at each update to w the auxiliary vector v accumulates the successive values of w so that thefinal weight vector is the average of the weight vec training data t tt1 1w0 0 v 0 i 0 2for n 1n 3for t 1t 4min w w st s sl y dt 5v v w 6i i 1 7w v figure 4 mira learning algorithmtors after each iterationthis averaging effect has been shown to help overfitting on each update mira attempts to keep the new weight vector as close as possible to the old weight vector subject to correctly classifying the instance under consideration with a margin given by the loss of the incorrect classificationsfor dependency trees the loss of a tree is defined to be the number of words with incorrect parents relative to the correct treethis is closely related to the hamming loss that is often used for sequences for arbitrary inputs there are typically exponen tially many possible parses and thus exponentially many margin constraints in line 4 of figure 431 singlebest miraone solution for the exponential blowup in number of trees is to relax the optimization by using only the single margin constraint for the tree with the highest score sthe resulting online update would then be min w w st s s l where y arg maxys mcdonald et al used a similar update with k constraints for the k highestscoring trees and showed that small values of k are sufficient toachieve the best accuracy for these methodshowever here we stay with a single best tree because k best extensions to the chuliuedmonds algorithm are too inefficient this model is related to the averaged perceptron algorithm of collins in that algorithm the single highest scoring tree is used toupdate the weight vectorhowever mira aggres sively updates w to maximize the margin between 527 the correct tree and the highest scoring tree which has been shown to lead to increased accuracy32 factored mirait is also possible to exploit the structure of the output space and factor the exponential number of mar gin constraints into a polynomial number of local constraints for the directed maximum spanning tree problemwe can factor the output by edges to obtain the fol lowing constraints min w w st s s 1 yt yt this states that the weight of the correct incomingedge to the word xj and the weight of all other in coming edges must be separated by a margin of 1it is easy to show that when all these constraintsare satisfied the correct spanning tree and all incor rect spanning trees are separated by a score at least as large as the number of incorrect incoming edgesthis is because the scores for all the correct arcs can cel out leaving only the scores for the errors causingthe difference in overall scoresince each single er ror results in a score increase of at least 1 the entirescore difference must be at least the number of er rorsfor sequences this form of factorization has been called local lattice preference let n be the number of nodes in graph gxthen the number of constraints is o since for each node we must maintain n 1 constraintsthe factored constraints are in general more re strictive than the original constraints so they mayrule out the optimal solution to the original problemmcdonald et al examines briefly factored mira for 
projective english dependency parsing but for that application kbest mira performs as well or better and is much faster to train we performed experiments on the czech prague dependency treebank we used the predefined training development and testing split of this data set furthermore we used the automatically generated pos tags that are provided with the data czech pos tags are very complex consisting of a series of slots that may or may not be filled with some value these slots represent lexical and grammatical properties such as standard pos case gender and tense the result is that czech pos tags are rich in information but quite sparse when viewed as a whole to reduce sparseness our features rely only on the reduced pos tag set from collins et al the number of features extracted from the pdt training set was 13 450 672 using the feature set outlined by mcdonald et al czech has more flexible word order than english and as a result the pdt contains nonprojective dependencies on average 23% of the sentences in the training development and test sets have at least one nonprojective dependency however less than 2% of total edges are actually nonprojective therefore handling nonprojective edges correctly has a relatively small effect on overall accuracy to show the effect more clearly we created two czech data sets the first czecha consists of the entire pdt the second czechb includes only the 23% of sentences with at least one nonprojective dependency this second set will allow us to analyze the effectiveness of the algorithms on nonprojective material we compared the following systems
1 coll1999 the projective lexicalized phrasestructure parser of collins et al
2 nn2005 the pseudoprojective parser of nivre and nilsson
3 mcd2005 the projective parser of mcdonald et al that uses the eisner algorithm for both training and testing this system uses kbest mira with k=5
4 singlebest mira in this system we use the chuliuedmonds algorithm to find the best dependency tree for singlebest mira training and testing
5 factored mira based on edge factorization as described in section 3.2 we use the chuliuedmonds algorithm to find the best tree for the test data
4.1 results
results are shown in table 1 there are two main metrics the first and most widely recognized is accuracy which measures the number of words that correctly identified their parent in the tree complete measures the number of sentences in which the resulting tree was completely correct
table 1 dependency parsing results for czech czechb is the subset of czecha containing only sentences with at least one nonprojective dependency
                     czecha                czechb
  method             accuracy  complete    accuracy  complete
  coll1999           82.8      -           -         -
  nn2005             80.0      31.8        -         -
  mcd2005            83.3      31.3        74.8      0.0
  singlebest mira    84.1      32.2        81.0      14.9
  factored mira      84.4      32.3        81.5      14.3
clearly there is an advantage in using the chuliuedmonds algorithm for czech dependency parsing
significant increase in training time over singlebest mirafurthermore we can also see that the mst parsers perform favorably compared to the more powerful lexicalized phrasestructure parsers such as those presented by collins et al andzeman that use expensive o parsing al gorithmswe should note that the results in collins et al are different then reported here due to different training and testing data setsone concern raised in section 221 is that search ing the entire space of nonprojective trees couldcause problems for languages that are primarily projectivehowever as we can see this is not a prob lemthis is because the model sets its weights with respect to the parsing algorithm and will disfavor features over unlikely nonprojective edgessince the space of projective trees is a subset ofthe space of nonprojective trees it is natural to won der how the chuliuedmonds parsing algorithm performs on projective data since it is asymptotically better than the eisner algorithmtable 2 shows theresults for english projective dependency trees ex tracted from the penn treebank using the rules of yamada and matsumoto english accuracy complete mcd2005 909 375 singlebest mira 902 332 factored mira 902 323table 2 dependency parsing results for english us ing spanning tree algorithmsthis shows that for projective data sets training and testing with the chuliuedmonds algorithm is worse than using the eisner algorithmthis is notsurprising since the eisner algorithm uses the a pri ori knowledge that all trees are projectivewe presented a general framework for parsing dependency trees based on an equivalence to maximum spanning trees in directed graphsthis frame work provides natural and efficient mechanismsfor parsing both projective and nonprojective languages through the use of the eisner and chuliu edmonds algorithmsto learn these structures we used online largemargin learning that empirically provides stateoftheart per formance for czecha major advantage of our models is the ability to naturally model nonprojective parsesnon projective parsing is commonly considered more difficult than projective parsinghowever under our framework we show that the opposite is actuallytrue that nonprojective parsing has a lower asymptotic complexityusing this framework we pre sented results showing that the nonprojective modeloutperforms the projective model on the prague de pendency treebank which contains a small number of nonprojective edgesour method requires a tree score that decomposes according to the edges of the dependency treeone might hope that the method would generalize to 529include features of larger substructuresunfortu nately that would make the search for the best tree intractable acknowledgments we thank lillian lee for bringing an importantmissed connection to our attention and koby cram mer for his help with learning algorithmsthis work has been supported by nsf itr grants 0205448 and 0428193
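the greedy select / contract / recurse procedure described in section 2.2.1 can be sketched compactly. the python below is an unoptimised illustration, not tarjan's efficient implementation, all identifiers are our own, and the toy scores only loosely follow the john / saw / mary running example in the text rather than reproducing it exactly.

# scores[u][v] is the score of a directed edge u -> v, node 0 is the root,
# and the function returns a dict mapping each dependent to its chosen head
def _find_cycle(best_in, root):
    # follow each node's chosen head until we reach the root or revisit a node
    for start in best_in:
        seen, v = [], start
        while v != root:
            if v in seen:
                return seen[seen.index(v):]
            seen.append(v)
            v = best_in[v]
    return None

def chu_liu_edmonds(scores, nodes=None, root=0):
    if nodes is None:
        nodes = set(scores) | {v for u in scores for v in scores[u]}
    # 1. greedily pick the highest scoring incoming edge for every non-root node
    best_in = {}
    for v in nodes:
        if v == root:
            continue
        best_in[v] = max((scores[u][v], u) for u in nodes
                         if u != v and v in scores.get(u, {}))[1]
    cycle = _find_cycle(best_in, root)
    if cycle is None:
        return best_in
    # 2. contract the cycle into a fresh node c and rescore edges into / out of it
    c = max(nodes) + 1
    cyc = set(cycle)
    new_scores = {u: {} for u in (nodes - cyc) | {c}}
    enter, leave = {}, {}
    for u in nodes:
        for v, s in scores.get(u, {}).items():
            if v not in nodes or u == v:
                continue
            if u not in cyc and v in cyc:
                adj = s - scores[best_in[v]][v]   # gain of rerouting v's incoming edge to u
                if adj > new_scores[u].get(c, float("-inf")):
                    new_scores[u][c] = adj
                    enter[u] = (u, v)
            elif u in cyc and v not in cyc:
                if s > new_scores[c].get(v, float("-inf")):
                    new_scores[c][v] = s
                    leave[v] = u
            elif u not in cyc and v not in cyc:
                new_scores[u][v] = s
    # 3. recurse on the contracted graph, then expand the cycle again
    sub = chu_liu_edmonds(new_scores, (nodes - cyc) | {c}, root)
    heads, broken = {}, None
    for v, u in sub.items():
        if v == c:
            src, dst = enter[u]
            heads[dst] = src          # the edge entering the cycle breaks it at one node
            broken = dst
        elif u == c:
            heads[v] = leave[v]
        else:
            heads[v] = u
    for v in cyc:                     # keep every cycle edge except the broken one
        if v != broken:
            heads[v] = best_in[v]
    return heads

# toy run: nodes 0=root, 1=john, 2=saw, 3=mary with illustrative scores
scores = {0: {1: 9, 2: 10, 3: 9}, 1: {2: 20, 3: 3}, 2: {1: 30, 3: 30}, 3: {1: 11, 2: 0}}
print(chu_liu_edmonds(scores))  # heads: saw <- root, john <- saw, mary <- saw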
H05-1066
nonprojective dependency parsing using spanning tree algorithmswe formalize weighted dependency parsing as searching for maximum spanning trees in directed graphsusing this representation the parsing algorithm of eisner is sufficient for searching over all projective trees in o timemore surprisingly the representation is extended naturally to nonprojective parsing using chuliuedmonds mst algorithm yielding an o parsing algorithmwe evaluate these methods on the prague dependency treebank using online largemargin learning techniques and show that mst parsing increases efficiency and accuracy for languages with nonprojective dependenciesthe key idea is to build a complete graph consisting of tokens of the sentence where each edge is weighted by a learned scoring function
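since the summary mentions edge-factored scoring and online large-margin learning, a sketch of the single-best mira update may help. the closed-form step below solves the single margin constraint of the relaxed problem; the sparse-dict weight representation and the feats(head, dep) edge feature function are assumptions of this illustration, and the decoder that produces pred_tree (for example an mst parser like the sketch above) is outside the snippet.

# trees are {dependent: head} dicts and feature vectors are sparse dicts
def tree_features(tree, feats):
    total = {}
    for dep, head in tree.items():
        for k, v in feats(head, dep).items():
            total[k] = total.get(k, 0.0) + v
    return total

def sparse_dot(w, f):
    return sum(w.get(k, 0.0) * v for k, v in f.items())

def mira_update(w, gold_tree, pred_tree, feats):
    # loss = number of words whose predicted head differs from the gold head
    loss = sum(1 for d in gold_tree if pred_tree.get(d) != gold_tree[d])
    f_gold = tree_features(gold_tree, feats)
    f_pred = tree_features(pred_tree, feats)
    delta = {k: f_gold.get(k, 0.0) - f_pred.get(k, 0.0)
             for k in set(f_gold) | set(f_pred)}
    norm_sq = sum(v * v for v in delta.values())
    if norm_sq == 0.0:
        return w
    # closed form of: min ||w' - w||  s.t.  w'.(f_gold - f_pred) >= loss
    tau = max(0.0, loss - (sparse_dot(w, f_gold) - sparse_dot(w, f_pred))) / norm_sq
    for k, v in delta.items():
        w[k] = w.get(k, 0.0) + tau * v
    return w

# toy usage with a hypothetical one-hot feature per (head, dependent) pair
feats = lambda head, dep: {"h%d->d%d" % (head, dep): 1.0}
w = mira_update({}, {1: 0, 2: 1}, {1: 2, 2: 0}, feats)
print(w)  # each gold edge feature gets +0.5, each wrongly predicted one gets -0.5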
emotions from text machine learning for textbased emotion prediction in addition to information text con tains attitudinal and more specifically emotional content this paper exploresthe textbased emotion prediction prob lem empirically using supervised machinelearning with the snow learning architecture the goal is to classify the emotional affinity of sentences in the narra tive domain of childrens fairy tales forsubsequent usage in appropriate expressive rendering of texttospeech synthe sis initial experiments on a preliminarydata set of 22 fairy tales show encourag ing results over a nave baseline and bow approach for classification of emotional versus nonemotional contents with some dependency on parameter tuning we also discuss results for a tripartite model which covers emotional valence as well as feature set alernations in addition we present plans for a more cognitively soundsequential model taking into considera tion a larger set of basic emotions text does not only communicate informative con tents but also attitudinal information includingemotional statesthe following reports on an them pirical study of textbased emotion predictionsection 2 gives a brief overview of the intendedapplication area whereas section 3 summarizes re lated worknext section 4 explains the empirical study including the machine learning model thecorpus the feature set parameter tuning etc section 5 presents experimental results from two classi fication tasks and feature set modificationssection 6 describes the agenda for refining the model before presenting concluding remarks in 7narrative text is often especially prone to having emotional contentsin the literary genre of fairy tales emotions such as happiness and anger and related cognitive states eg love or hate becomeintegral parts of the story plot and thus are of particular importancemoreover the story teller read ing the story interprets emotions in order to orally convey the story in a fashion which makes the story come alive and catches the listenersattentionin speech speakers effectively express emotions by modifying prosody including pitch intensity and durational cues in the speech signalthus inorder to make texttospeech synthesis sound as natural and engaging as possible it is important to con vey the emotional stance in the texthowever thisimplies first having identified the appropriate emo tional meaning of the corresponding text passagethus an application for emotional texttospeech synthesis has to solve two basic problemsfirstwhat emotion or emotions most appropriately de scribe a certain text passage and second given a text passage and a specified emotional markup how to render the prosodic contour in order to convey the emotional content the textbased emotion prediction task addresses the first of these two problems579for a complete general overview of the field of affective computing see is a rare study in textbased inference of sentencelevel emotional affin itythe authors adopt the notion of basic emotions cf and use six emotion categories anger disgust fear happiness sadness surprisethey critique statistical nlp for being unsuccessful at the small sentence level and insteaduse a database of commonsense knowledge and create affect models which are combined to form a rep resentation of the emotional affinity of a sentenceat its core the approach remains dependent on anemotion lexicon and handcrafted rules for conceptual polarityin order to be effective emotion recog nition must go beyond such resources the authors note themselves that 
lexical affinity is fragilethe method was tested on 20 userspreferences for an emailclient based on usercomposed text emails describing short but colorful eventswhile the users preferred the emotional client this evaluation does not reveal emotion classification accuracy nor how well the model generalizes on a large data setwhereas work on emotion classification fromthe point of view of natural speech and human computer dialogues is fairly extensive eg this appears not to be the case for texttospeech synthe sis a short study by addresses sentencelevel emotion recognition forjapanese ttstheir model uses a composition as sumption the emotion of a sentence is a function of the emotional affinity of the words in the sentencethey obtain emotional judgements of 73 adjectives and a set of sentences from 15 human subjects andcompute wordsemotional strength based on the ra tio of times a word or a sentence was judged to fall into a particular emotion bucket given the number of human subjectsadditionally they conducted aninteractive experiment concerning the acoustic ren dering of emotion using manual tuning of prosodicparameters for japanese sentenceswhile the au thors actually address the two fundamental problems of emotional tts their approach is impractical and most likely cannot scale up for a real corpusagain while lexical items with clear emotional meaningsuch as happy or sad matter emotion classifica tion probably needs to consider additional inferencemechanismsmoreover a nave compositional ap proach to emotion recognition is risky due to simplelinguistic facts such as contextdependent seman tics domination of words with multiple meanings and emotional negationmany nlp problems address attitudinal mean ing distinctions in text eg detecting subjective opinion documents or expressions eg measuring strength of subjective clauses determining word polarity or textsattitudinal valence eg here it suffices to say that the targets the domain and the intended application differ our goal is to classify emotional text passagesin childrens stories and eventually use this information for rendering expressive childdirected sto rytelling in a texttospeech applicationthis can be useful eg in therapeutic education of children with communication disorders this part covers the experimental study with a formal problem definition computational implementa tion data features and a note on parameter tuning41 machine learning modeldetermining emotion of a linguistic unit can be cast as a multiclass classification problemforthe flat case let t denote the text and s an them bedded linguistic unit such as a sentence where s t let k be the number of emotion classes e em1 em2 emk where em1 denotes the special case of neutrality or absence of emotionthe goal is to determine a mapping function f s emi such that we obtain an ordered labeled pair the mapping is based on f f1 f2 fn where f contains the features derived from the textfurthermore if multiple emotion classes can characterize s then given ee the target of the mapping function becomes the ordered pair finally as further discussed in section 6 the hierarchical case of label assignment requires a sequen 580tial model that further defines levels of coarse ver sus finegrained classifiers as done by for the question classification problem42 implementationwhereas our goal is to predict finer emotional mean ing distinctions according to emotional categories in speech in this study we focus on the basic task of recognizing emotional passages and on determining their valence 
becausewe currently do not have enough training data to ex plore finergrained distinctionsthe goal here is to get a good understanding of the nature of the tep problem and explore features which may be usefulwe explore two cases of flat classification using a variation of the winnow update rule implemented in the snow learning architecture 1 which learns a linear classifierin feature space and has been successful in sev eral nlp applications eg semantic role labeling in the first case the set of emotion classes e consists of emotional versus nonemotional or neutral ie e nein the second case e has been incremented with emotional distinctions accordingto the valence ie e npeneexperi ments used 10fold crossvalidation with 90 train and 10 test data2 43 datathe goal of our current data annotation project is to annotate a corpus of approximately 185 children stories including grimms hc andersens and bpotters storiesso far the annotation process pro ceeds as follows annotators work in pairs on the same storiesthey have been trained separately andwork independently in order to avoid any annota tion bias and get a true understanding of the task difficultyeach annotator marks the sentence levelwith one of eight primary emotions see table 1 re flecting an extended set of basic emotions in order to make the annotation process more focused emotion is annotated from the point of view of the text ie the feeler in the sentencewhile the primary emotions are targets the sentences are also 1available from httpl2rcsuiuceducogcomp2experiments were also run for perceptron however the re sults are not includedoverall perceptron performed worsemarked for other affective contents ie background mood secondary emotions via intensity feeler andtextual cuesdisagreements in annotations are re solved by a second pass of tiebreaking by the first author who chooses one of the competing labelseventually the completed annotations will be made availabletable 1 basic emotions used in annotation abbreviation emotion class a angry d disgusted f fearful h happy sa sad su positively surprised su negatively surprisedemotion annotation is hard interannotator agreement currently range at 24 51 with the ra tio of observed annotation overlap ranging between4564 depending on annotator pair and stories as signedthis is expected given the subjective natureof the annotation taskthe lack of a clear defini tion for emotion vs nonemotion is acknowledgedacross the emotion literature and contributes to dy namic and shifting annotation targetsindeed acommon source of confusion is neutral ie de ciding whether or not a sentence is emotional or nonemotionalemotion perception also depends on which characters pointofview the annotator takesand on extratextual factors such as annotators per sonality or moodit is possible that by focusing more on the training of annotator pairs particularlyon joint training agreement might improvehowever that would also result in a bias which is prob ably not preferable to actual perceptionmoreoverwhat agreement levels are needed for successful ex pressive tts remains an empirical questionthe current data set consisted of a preliminary an notated and tiebroken data set of 1580 sentence or 22 grimmstalesthe label distribution is in table2neutral was most frequent with 5994table 2 percent of annotated labels a d f h 1234 089 703 677n sa su su5994 734 259 310 581 table 3 emotional vs neutral examples e n 4006 5994 table 4 positive vs negative vs neutral pe ne n 987 3019 5994 next for the purpose of this study all 
emotionalclasses ie a d f h sa su su were com bined into one emotional superclass e for the firstexperiment as shown in table 3for the second experiment we used two emotional classes ie pos itive versus negative emotions peh su and nea d f sa su as seen in table 444 feature setthe feature extraction was written in pythonsnow only requires active features as input which resulted in a typical feature vector size of around 30 featuresthe features are listed belowthey were imple mented as boolean values with continuous valuesrepresented by rangesthe ranges generally over lapped in order to get more generalization coverage1first sentence in story2conjunctions of selected features 3direct speech in sentence4thematic story type 7sentence length in words 8ranges of story progress 9percent of jj n v rb 10v count in sentence excluding participles 11positive and negative word counts 12wordnet emotion words13interjections and affective words14content bow n v jj rb words by posfeature conjunctions covered pairings of counts of positive and negative words with range of story progress or interjections respectivelyfeature groups 1 3 5 6 7 8 9 10 and 14 are extracted automatically from the sentences in the sto ries with the snow postagger used for features 9 10 and 14group 10 reflects how many verbs are active in a sentencetogether with the quotation and punctuation verb domination intends to capture the assumption that emotion is often accompanied by increased action and interactionfeature group 4 is based on finish scholar antti aarnes classesof folktale types according to their informative the matic contents the current tales have 3 top story types and 15 subtypes this feature intends to provide an idea about the storys general affectivepersonality whereas the feature re flecting the story progress is hoped to capture that some emotions may be more prevalent in certain sections of the story for semantic tasks words are obviously impor tantin addition to considering content words we also explored specific word listsgroup 11 uses 2 lists of 1636 positive and 2008 negative words obtained from group 12 uses lexical lists extracted from wordnet on the basis of the primary emotion wordsin their adjectival and nominal formsfor the adjectives pywordnets simi lar feature was used to retrieve similar items ofthe primary emotion adjectives exploring one addi tional level in the hierarchy for the nouns andany identical verbal homonyms synonyms and hy ponyms were extracted manually3 feature group 13used a short list of 22 interjections collected manu ally by browsing educational esl sites whereas theaffective word list of 771 words consisted of a combination of the nonneutral words from and only a subset of these lexical lists actually occurred4 3multiwords were transformed to hyphenated form4at this point neither stems and bigrams nor a list of ono matopoeic words contribute to accuracyintermediate resource processing inserted some feature noise582 the above feature set is henceforth referred to as all features whereas content bow is just group 14the content bow is a more interesting baseline than the nave one p ie always assigning the most likely neutral categorylastly emotions blend and transform thus emotion and background mood of i am mediately adjacent sentences ie the sequencing seems importantat this point it is not implemented automaticallyinstead it was extracted from themanual emotion and mood annotationsif sequenc ing seemed important an automatic method using sequential target activation could be added 
next45 parameter tuningthe winnow parameters that were tuned included promotional demotional activation threshold initial weights and the regularization parame ter s which implements a margin between positive and negative examplesgiven the currently fairlylimited data results from 2 alternative tuning meth ods applied to all features are reportedfor the condition called septuneeval 50 of the sentences were randomly selected and set aside to be used for the parameter tuningprocess onlyof this subset 10 were subsequently randomly chosen as test set with the remaining 90 used for training during the automatic tuning process which covered 4356 different parameter combinationsresulting pa rameters were 11 05 5 10 s 05the remaining half of the data was used for training and testing in the 10fold crossvalidation evaluation in table 5 due to randomly splitting the datagiven that the data set is currently small for the condition named sametuneeval tuning was performed automatically on all data using a slightly smaller set of combinations and thenmanually adjusted against the 10fold cross validation processresulting parameters were 12 09 4 1 s 05all data was used for evaluationemotion classification was sensitive to the selected tuning datagenerally a smaller tuning set resultedin pejorative parameter settingsthe random selec tion could make a difference but was not explored5 results and discussionthis section first presents the results from experiments with the two different confusion sets de scribed above as well as feature experimentation51 classification resultsaverage accuracy from 10fold cross validation forthe first experiment ie classifying sentences as either neutral or emotional are included in ta ble 5 and figure 1 for the two tuning conditions on the main feature sets and baselinesas expected table 5 mean classification accuracy n vs e 2 conditions sametuneeval septuneeval p 5994 6005 content bow 6101 5830 all features except bow 6468 6345 all features 6899 6331 all features sequencing 6937 6294 degree of success reflects parameter settings bothfor content bow and all featuresnevertheless un der these circumstances performance above a navebaseline and a bow approach is obtainedmore over sequencing shows potential for contributing in one casehowever observations also point to three issues first the current data set appears tobe too smallsecond the data is not easily separa blethis comes as no surprise given the subjectivenature of the task and the rather low interannota tor agreement reported abovemoreover despite the schematic narrative plots of childrens stories tales still differ in their overall affective orientation which increases data complexitythird and finally the emotion class is combined by basic emotion labels rather than an original annotated labelmore detailed averaged results from 10fold crossvalidation are included in table 6 using all features and the separated tuning and evaluationdata condition septuneevalwith these parame ters approximately 3 improvement in accuracy over the nave baseline p was recorded and 5 over the content bow which obviously did poorly with these parametersmoreover precision is 583 0 10 20 30 40 50 60 70 sametuneeval septuneeval tuning sets accuracy p content bowall features except bow all featuresall features sequencing figure 1 accuracy under different conditions table 6 classifying n vs e measure n e averaged accuracy 063 063 averaged error 037 037 averaged precision 066 056 averaged recall 075 042 averaged fscore 070 047 higher than recall for 
the combined emotion classin comparison with the sametuneeval procedure the accuracy improved by approximately 9 over p and by 8 over content bowin the second experiment the emotion category was split into two classes emotions with positiveversus negative valencethe results in terms of precision recall and fscore are included in table 7 us ing all features and the septuneeval conditionthedecrease in performance for the emotion classes mir rors the smaller amounts of data available for each classas noted in section 43 only 987 of the sentences were annotated with a positive emotionand the results for this class are worsethus perfor mance seems likely to improve as more annotated story data becomes available at this point we are experimenting with merely around 12 of the total texts targeted by the data annotation project52 feature experimentsemotions are poorly understood and it is espe cially unclear which features may be important for their recognition from textthus we experimented table 7 n pe and ne n ne pe averaged precision 064 045 013 averaged recall 075 027 019 averaged fscore 069 032 013 table 8 feature group members word lists interj wordnet affective lists posneg syntactic length ranges pos vcount ranges storyrelated storyprogress 1st sent story type orthographic punctuation uppercase words quote conjunctions conjunctions with posneg content bow words with different feature configurationsstarting with all features again using 10fold crossvalidation forthe separated tuningevaluation condition septuneeval one additional feature group was removed un til none remainedthe feature groups are listed intable 8figure 2 on the next page shows the accuracy at each step of the cumulative subtraction processwhile some feature groups eg syntactic ap peared less important the removal order matteredeg if syntactic features were removed first accuracy decreasedthis fact also illustrated that fea tures work together removing any group degraded performance because features interact and there isno true independenceit was observed that featurescontributions were sensitive to parameter tun ingclearly further work on developing features which fit the tep problem is neededthis was a first passof addressing tep for ttsat this point the annotation project is still ongoing and we only had a fairly small data set to draw onnevertheless results indicate that our learning ap proach benefits emotion recognitionfor example the following instances also labeled with the same valence by both annotators were correctly classifiedboth in the binary and the tripartite polar ity task given the separated tuning and evaluation data condition and using all features ene then he offered the dwarfs money and prayed and besought them to let him take her away but they said we will not part with her for all the gold in the world584 cumulative removal of feature groups 6181 6331 6257 5795 5830 5893 5956 55 60 65 all features word lists syntactic storyrelated orthographic conjunctions content words a ccur acy all features p bow figure 2 averaged effect of feature group removal using septuneeval n and so the little girl really did grow up her skin was as white as snow her cheeks as rosy as the blood and her hair as black as ebony and she was called snowdrop ene ahshe answered have i not reason to weep n nevertheless he wished to try him first and took a stone in his hand and squeezed it together so that water dropped out of itcases and are from the wellknown folk tale snowdrop also called snow whiteand are also correctly 
classified by the sim ple content bow approach although our approach has higher prediction confidence for ene it also considers eg direct speech a fairly high verb count advanced story progress connotative wordsand conjunctions thereof with story progress fea tures all of which the bow missesin addition thesimple content bow approach makes incorrect pre dictions at both the bipartite and tripartite levels forexamples and from the jokes and anec dotes stories clever hans and the valiant littletailor while our classifier captures the affective dif ferences by considering eg distinctions in verbcount interjection pos sentence length connota tions story subtype and conjunctionsnext we intend to use a larger data set to conduct a more complete study to establish mature findingswe also plan to explore finer emotional meaning dis tinctions by using a hierarchical sequential modelwhich better corresponds to different levels of cognitive difficulty in emotional categorization by humans and to classify the full set of basic level emo tional categories discussed in section 43sequential modeling of simple classifiers has been successfully employed to question classification for example by in addition we are working on refining and improving the feature set and given more data tuning can be improved on a sufficiently large development setthe three subcorpora in the annotation project can reveal how authorship affects emotion perception and classificationmoreover arousal appears to be an important dimension for emotional prosody especially in storytelling thus we are planning on exploring degrees of emotional intensity in a learning scenario ie a prob lem similar to measuring strength of opinion clauses finally emotions are not discrete objects rather they have transitional nature and blend and overlap along the temporal dimensionfor example include parallel estimations of emotional activity and include smooth 585 ing techniques such as interpolation and decay to capture sequential and interactive emotional activityobservations from tales indicate that some emotions are more likely to be prolonged than othersthis paper has discussed an empirical study of thetextbased emotion prediction problem in the domain of childrens fairy tales with childdirected ex pressive texttospeech synthesis as goalbesidesreporting on encouraging results in a first set of com putational experiments using supervised machine learning we have set forth a research agenda for tackling the tep problem more comprehensivelywe are grateful to the annotators in particular a rasmussen and s siddiquiwe also thank two anonymous reviewers for commentsthis work was funded by nsf under award itr0205731 and ns itr iis0428472the annotation is supported byuiucs research boardthe authors take sole re sponsibility for the work
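The boolean, range-based feature encoding described in the feature set above (continuous values mapped to overlapping ranges, plus positive/negative word counts and feature conjunctions) can be sketched roughly as follows. This is a minimal illustration, not the authors' extractor: the range boundaries, the tiny placeholder word lists, and the function and feature names are all assumptions introduced here.

```python
# Illustrative sketch of boolean "ranged" features; the specific range
# boundaries and word lists below are assumptions, not the original setup.

POSITIVE_WORDS = {"happy", "glad", "love"}   # placeholders; the paper uses
NEGATIVE_WORDS = {"weep", "fear", "angry"}   # lists of ~1,600 / ~2,000 words

# Overlapping ranges give more generalization coverage, as in the paper.
STORY_PROGRESS_RANGES = [(0.0, 0.3), (0.2, 0.6), (0.5, 1.0)]
LENGTH_RANGES = [(1, 8), (6, 15), (12, 40)]

def active_features(tokens, sent_index, n_sents, is_first, has_quote):
    """Return the set of active boolean features for one sentence."""
    feats = set()
    if is_first:
        feats.add("first_sentence")
    if has_quote:
        feats.add("direct_speech")
    progress = sent_index / max(n_sents - 1, 1)
    for lo, hi in STORY_PROGRESS_RANGES:
        if lo <= progress <= hi:
            feats.add(f"progress_{lo}_{hi}")
    n = len(tokens)
    for lo, hi in LENGTH_RANGES:
        if lo <= n <= hi:
            feats.add(f"len_{lo}_{hi}")
    pos_count = sum(t.lower() in POSITIVE_WORDS for t in tokens)
    neg_count = sum(t.lower() in NEGATIVE_WORDS for t in tokens)
    feats.add(f"pos_count_{min(pos_count, 3)}")
    feats.add(f"neg_count_{min(neg_count, 3)}")
    # feature conjunctions, e.g. polarity counts paired with story progress
    for pf in [f for f in feats if f.startswith("progress_")]:
        feats.add(f"conj_pos{min(pos_count, 3)}_{pf}")
    return feats

if __name__ == "__main__":
    print(active_features("Ah she answered have I not reason to weep".split(),
                          sent_index=40, n_sents=60,
                          is_first=False, has_quote=True))
```

Only the active features are passed on, which keeps the typical vector around a few dozen entries, in line with the sparse input that SNoW expects.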
H05-1073
Emotions from text: machine learning for text-based emotion prediction. In addition to information, text contains attitudinal, and more specifically, emotional content. This paper explores the text-based emotion prediction problem empirically, using supervised machine learning with the SNoW learning architecture. The goal is to classify the emotional affinity of sentences in the narrative domain of children's fairy tales, for subsequent usage in appropriate expressive rendering of text-to-speech synthesis. Initial experiments on a preliminary data set of 22 fairy tales show encouraging results over a naive baseline and BOW approach for classification of emotional versus non-emotional contents, with some dependency on parameter tuning. We also discuss results for a tripartite model which covers emotional valence, as well as feature set alternations. In addition, we present plans for a more cognitively sound sequential model, taking into consideration a larger set of basic emotions.
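As a rough illustration of what the tuned Winnow parameters discussed earlier (promotion, demotion, activation threshold, initial weights) control, here is a textbook-style Winnow update loop over boolean active features. It is a minimal sketch, not SNoW itself; the toy data and the default parameter values are illustrative, merely echoing the ranges reported in the tuning experiments.

```python
# Minimal Winnow sketch: multiplicative promotion/demotion of weights for
# active (boolean) features. This is a toy illustration, not SNoW.

def winnow_train(examples, alpha=1.1, beta=0.5, theta=4.0,
                 init_weight=1.0, epochs=5):
    """examples: list of (set_of_active_features, label in {0, 1})."""
    weights = {}
    for _ in range(epochs):
        for feats, label in examples:
            for f in feats:
                weights.setdefault(f, init_weight)
            score = sum(weights[f] for f in feats)
            predicted = 1 if score >= theta else 0
            if predicted == label:
                continue
            factor = alpha if label == 1 else beta   # promote or demote
            for f in feats:
                weights[f] *= factor
    return weights

def winnow_predict(weights, feats, theta=4.0):
    return 1 if sum(weights.get(f, 0.0) for f in feats) >= theta else 0

if __name__ == "__main__":
    data = [({"direct_speech", "neg_count_2"}, 1),
            ({"len_6_15", "pos_count_0"}, 0),
            ({"direct_speech", "pos_count_1"}, 1)]
    w = winnow_train(data)
    print(winnow_predict(w, {"direct_speech", "neg_count_2"}))
```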
recognising textual entailment with logical inference we use logical inference techniques for recognising textual entailment as the performance of theorem proving turnsout to be highly dependent on not read ily available background knowledge we incorporate model building a technique borrowed from automated reasoning and show that it is a useful robust method to approximate entailment finally we use machine learning to combine these deep semantic analysis techniques with simpleshallow word overlap the resulting hy brid model achieves high accuracy on the rte testset given the state of the art ourresults also show that the different techniques that we employ perform very dif ferently on some of the subsets of the rte corpus and as a result it is useful to use the nature of the dataset as a feature recognising textual entailment is the task to find out whether some text t entails a hypothesis h this task has recently been the focus of a challenge organised by the pascal network in 200451 in example 1550 h follows from t whereas this is not the case in example 7311all examples are from the corpus released as part of the rte challengeit is downloadable from httpwwwpascalnetworkorgchallengesrtethe exam ple numbers have also been kepteach example is marked for entailment as true if h follows from t and false otherwisethe dataset is described in section 41example 1550 t in 1998 the general assembly of the nippon sei ko kai voted to accept female priestsh the anglican church in japan approved the ordination of womenexample 731 t the city tenochtitlan grew rapidly and was the center of the aztecs great empireh tenochtitlan quickly spread over the island marshes and swampsthe recognition of textual entailment is without doubt one of the ultimate challenges for any nlpsystem if it is able to do so with reasonable accuracy it is clearly an indication that it has some thor ough understanding of how language worksindeed recognising entailment bears similarities to turings famous test to assess whether machines can think as access to different sources of knowledge and theability to draw inferences seem to be among the primary ingredients for an intelligent systemmoreover many nlp tasks have strong links to entailment in summarisation a summary should be en tailed by the text paraphrases can be seen as mutualentailment between t and h in ie the extracted in formation should also be entailed by the textin this paper we discuss two methods for recog nising textual entailment a shallow method relyingmainly on word overlap and deep se mantic analysis using stateoftheart offtheshelf inference tools namely a theorem prover and amodel builder these tools rely on dis course representation structures for t and h as well as lexical and world knowledgeto our knowledge few approaches to entailment currently use theorem provers and none incorporate model building both methods are domainindependent to increasetransferrability and have not been tailored to any par ticular test suitein section 4 we test their accuracy and robustness on the rte datasets as one of the few currently available datasets for textual inferencewe also combine the two methods in a hybrid approach using machine learningwe discuss particularly the following questionscan the methods presented improve significantly over the baseline and what are the per formance differences between themdoes thehybrid system using both shallow and deep se mantic analysis improve over the individual use of these methodshow far does deep semantic analysis suffer from a 
lack of lexical and world knowledge and how can we perform logical inference in the face of potentially large knowledge gapshow does the design of the test suite affect per formanceare there subsets of the test suitethat are more suited to any particular textual en tailment recognition methodwe use several shallow surface features to model the text hypothesis and their relation to each othermost importantly we expect some dependencybetween surface string similarity of text and hypothesis and the existence of entailmentour string sim ilarity measure uses only a form of extended wordoverlap between text and hypothesis taking into account equality of words synonymy and morpholog ical derivationswordnet is usedas the knowledge source for synonymy and deriva tionsthe exact procedure is as followsboth text and hypothesis are tokenised and lem matiseda lemma l1 in the hypothesis is said to be related to a lemma l2 in the text iff l1 and l2 are equal belong to the same wordnet synset are related via wordnet derivations or arerelated via a combination of synonymy and deriva tions no word sense disambiguation is performed and all synsets for a particular lemma are consideredin addition each lemma in the hypothesis is as signed its inverse document frequency accessing the web as corpus via the googleapi as its weightthis standard procedure allows us to assign more importance to less frequent wordsthe overlap measure wnoverlap between text and hypothesis is initialised as zeroshould a lemma in the hypothesis be related to a lemma in the text its weight is added to wnoverlap otherwise it is ignoredin the end wnoverlap is normalised by dividing it by the sum of all weights of the lemmas in the hypothesisthis ensures that wnoverlap isalways a real number between 0 and 1 and also en sures independence of the length of the hypothesisapart from wnoverlap we take into account length of text and hypothesis because in most of the observed cases for true entailments the hypothesis is shorter than the text as it contains less informationthis is covered by three numerical features measuring the length of the text of the hypothesis and the relative length of hypothesis with regard to the text31 semantic interpretationwe use a robust widecoverage ccgparser to generate finegrained semantic representations for each thpairthe semantic representation language is a firstorder fragment of the drs language used in discourse representation theory conveying argument struc ture with a neodavidsonian analysis and includingthe recursive drs structure to cover negation dis junction and implicationconsider for example example 78 t clintons new book is not big seller hereh clintons book is a big sellerdrs x1 x2 x3 book book x1x2 clinton of e4 x5 big seller be agent patient loc drs x1 x2 e3 x4 book clinton of big seller be agent patient 629 proper names and definite descriptions are treated as anaphoric and bound to previously introduceddiscourse referents if possible otherwise accommodatedsome lexical items are specified as presup position triggersan example is the adjective newwhich has a presuppositional reading as shown by the existence of two different bookentities in drsscope is fully specifiedto check whether an entailment holds or not weuse two kinds of automated reasoning tools vam pire a theorem prover and paradox a model builder both tools are developed to deal with inference problems stated in firstorder logicwe use the standard translation from drs to firstorder logic to map our se mantic representation onto the 
format required by the inference tools32 theorem provinggiven a th pair a theorem prover can be used to find answers to the following conjectures 1t implies h 2th are inconsistent assume that the function drs denotes the drs cor responding to t or h and fol the function that translates a drs into firstorder logicthen if the theorem prover manages to find a proof for folfol we know that we are dealing with a true entailmentin addition to use a theorem prover to detect incon sistencies in a th pair we give it foldrs if the theorem prover returns a proof for we know that t and h are inconsistent and t definitelydoesnt entail h examples the theorem prover will find that t i am plies h for the following examples example 1005 t jessica litman a law professor at michigans wayne state university has specialized in copyright law and internet law for more than 20 yearsh jessica litman is a law professorexample 1977 t his family has steadfastly denied the chargesh the charges were denied by his familyexample 898 t after the war the city was briefly occupied by the allies and then was returned to the dutchh after the war the city was returned to the dutchexample 1952 t crude oil prices soared to record levelsh crude oil prices risethese examples show how deep semantic analy sis deals effectively with apposition activepassive alternation coordination and can integrate lexical knowledgethe rte dataset only contains a few inconsistent th pairseven although example 78 might look like a case in point it is not inconsistent it would be if the t in the example would have been clintons new book is not a big sellerthe addition of the adverb here makes th consistent33 background knowledgethe theorem prover needs background knowledge to support its proofsfinding a proof for example 1952 above is only possible if the theorem prover knows that soaring is a way of risinghow does it know thisbecause in addi tion to the information from t and h alone we also supply relevant background knowledge in the form of firstorder axiomsinstead of giving justfoldrs to the theorem prover we sup ply it with drs where bk is short for the relevant background knowledgewe generate background knowledge using threekinds of sources generic knowledge lexical knowl edge and geographical knowledgeaxioms forgeneric knowledge cover the semantics of possessives activepassive alternation and spatial knowledgethere are about 20 different axioms in the current system and these are the only manually gener ated axiomsan example is exyagentinin which states that if an event is located in y then so is the agent of that eventlexical knowledge is created automatically from wordneta hyponymy relation between two 630 synsets a and b is converted into xbtwo synset sisters a and b are translated into xbhere the predicate symbols from the drs are mapped to wordnet synsets using a variant of lesks wsd algorithm examples 78 and 1952 would be supported by knowledge similar to xperson xartifact xperson xrise finally axioms covering geographical knowledge about capitals countries and us states are extracted automatically from the cia factbookan example xyfrancein 34 model buildingwhile theorem provers are designed to prove that a formula is a theorem they are generally not good at deciding that a formula is not a theoremmodel builders are designed to show that a formula is true in at least one modelto exploit these complementary approaches to inference we use both a theorem prover and amodel builder for any inference problem the theo rem prover attempts to 
prove the input whereas the model builder simultaneously tries to find a model for the negation of the inputif the model builder finds a model for folfol we know that there cannot be a proof for its negation and if the model builder is able to generate a model for foldrs we know that t and h are consistent and vice versaanother attractive property of a model builder is that it outputs a model for its input formula a model is here the logical notion of a model describing a situation in which the input formula is trueformally a model is a pair df where d is the set of entities in thedomain and f a function mapping predicate sym bols to sets of domain membersfor instance the model returned for fol in example 78 is one where the domain consists of three entities d d1d2d3 f f d1d2 f f d3 f f f f f model builders like paradox generate finite mod els by iterationthey attempt to create a model for domain size 1if they fail they increase the domain size and try again until either they find a model ortheir resources run outthus although there are in finitely many models satisfying fol modelbuilders generally build a model with a minimal do main size35 approximating entailmentin an ideal world we calculate all the required back ground knowledge and by either finding a proof ora countermodel decide how t and h relate with re spect to entailmenthowever it is extremely hard to acquire all the required background knowledgethis is partly due to the limitations of word sense disambiguation the lack of resources like wordnet and the lack of general knowledge in a form suitable for automatic inference tasksto introduce an element of robustness into our ap proach we use the models as produced by the modelbuilders to measure the distancefrom an entail mentthe intuition behind it is as followsif his entailed by t the model for th is not informa tive compared to the one for t and hence does not introduce new entitiesput differently the domain size for th would equal the domain size of t incontrast if t does not entail h h normally intro duce some new information and this will be reflected in the domain size of th which then is larger than the domain size of t it turns out that this differencebetween the domain sizes is a useful way of measur ing the likelihood of entailmentlarge differences are mostly not entailments small differences mostly areconsider the following example 631 example 1049 t four venezuelan firefighters who were traveling to a training course in texas were killed when their sport utility vehicle drifted onto the shoulder of a highway and struck a parked truckh four firefighters were killed in a car accidentalthough this example is judged as a true entail ment vampire does not find a proof because it lacks the background knowledge that one way of causing acar accident is to drift onto the shoulder of the high way and strike somethingit generates a model withdomain size 11 for fol and a model with do main size 12 for foldrsthe absolute difference in domain sizes is small and thereforelikely to indicate an entailmentapart from the absolute difference we also compute the difference rel ative to the domain sizefor the example above the relative domain size yields 112 0083the domain size only tells us something about the number of entities used in a modelnot about the number of established relations between the models entitiestherefore we also introduce the notion ofmodel sizethe model size is defined here by count ing the number of all instances of twoplace relations in the model and multiplying 
this with the domain sizefor instance the following model d d1d2d3 f d1d2 f d3 f f has a domain size of 3 and 3 instantiated twoplace relations yielding a model size of 3 3 936 deep semantic featuresgiven our approach to deep semantic analysiswe identified eight features relevant for recognising textual entailmentthe theorem prover provides us with two features entailed determin ing whether t implies h and inconsistentdetermining whether t together with h is incon sistentthe model builder gives us six features domainsize and modelsize for th as well as the absolute and relative difference between the sizes of t and th both for the size of the domains and the size of the models there are not many test suites available for textual inferencewe use throughout this section the dataset made available as part of the rte challenge41 dataset design and evaluation measuresthe organisers released a development set of 567 sentence pairs and a test set of 800 sentence pairsin both sets 50 of the sentence pairs were anno tated as true and 50 as false leading to a 50 most frequent class baseline for automatic systemsthe examples are further distinguished according to the way they were designed via a socalled task variablefor examples marked cd sentences with high lexical overlap in comparable news articles were selected whereas thehypotheses of examples marked qa were formed by translating questions from eg trec into statementsthe other subsets are ie mt rc pp and ir the dif ferent examples and subsets cover a wide variety of different aspects of entailment from incorporationof background knowledge to lexical to syntactic en tailment and combinations of all thesefor a more exhaustive description of dataset design we refer the reader to 42 experiment 1 human upper boundto establish a human upper bound as well as inves tigate the validity of the datasets issued one of the authors annotated all 800 examples of the test set for entailment using the short rte annotation rulesthe annotation was performed before the release of the gold standard annotation for the test set and was therefore independent of the organisersannotationthe organisersand the authors annotation yielded a high percentage agreement of 9525however 33 of the originally created examples were alreadyfiltered out of the corpus before release by the organisers because of agreementrelated problemstherefore we expect that human agreement on textual en tailment in general is rather lower632 43 decision trees for entailment recognitionwe expressed each example pair as a feature vector using different subsets of the features described in section 2 and section 3 for each experimentwe then trained a decision tree for classification into true and false entailment on the development set using the weka machine learning tool and tested on the test setapartfrom a classification weka also computes a confi dence value for each decision dependent on the leaf in the tree that the classified example falls into if the leaf covers x examples in the training set of which y examples are classified wrongly then the error rate is yx and the confidence value is 1yx our evaluation measures are accuracy as the percentage of correct judgements as well asconfidenceweighted average score which rewards the systems ability to assign a higher confi dence score to correct judgements than wrong ones after the n judgements are sorted in decreasing order by their confidence value the following measure is computed cws 1 n ni1 correctupranki i all evaluation measures are computed 
over the whole test set as well as on the 7 different subsetsthe results are summarised in table 1we also computed precision recall and f measure for both classes true and false and will discuss the results in the text whenever of interestexperiment 2 shallow features in this experi ment only the shallow features were usedthe overall accuracy of 569 is significantly higher than the baseline2column 2 in table 1 shows that this decent per formance is entirely due to excellent performanceon the cd subsetin addition the method overes timates the number of true entailments achieving a recall of 0926 for the class true but a precision of only 0547 on the same classin contrast it has2we used the ztest for the difference between two propor tions to measure whether the difference in accuracy between two algorithms or an algorithm and the baseline is statistically significant at the 5 levelgood precision but low recall for thefalse classthus there is a correspondence be tween low word overlap and false examples high overlap however is normally necessary but not sufficient for true entailment experiment 3 strict entailment to test the potential of entailment as discovered by theorem prov ing alone we now use only the entailment and inconsistent featuresas to be expected the decision tree shows that if a proof for t implies h has been found the example should be classified as true otherwise as false3 the precision for the class true is reasonably high if a proof is found then an entailment is indeed very likelyhowever recall is very low as only 30 proofs were found on the test set this yields an fmeasure of only 010for the true classdue to the low recall the over all accuracy of the system is not significantly higher than the baselinethus this feature behaves in the opposite way to shallow lexical overlap and overgenerates the false classmissing lexical and background knowledge is the major because for missing proofsexperiment 4 approximating entailment as discussed in section 35 we now try to compensate for missing knowledge and improve recall for true entailments by approximating entailment with the features that are furnished by the model builderthus experiment 4 uses all eight deep semantic analysis features including the features capturing differences in domain and modelsizesthe recallfor the true class indeed jumps to 0735al though unavoidably the false class suffers the resulting overall accuracy is significantly higher than when using the features provided by the theorem prover alone the confidence weighted score also rises substantially from 0548 to 0608the approximation achieved can be seen in the differenttreatment of example 1049 in ex periments 3 and 4in experiment 3 this example 3the inconsistent feature was not used by the decision tree as very few examples were covered by that feature633 table 1 summary of results for experiments 1 to 6 exp 1 human 2 shallow 3 strict 4 deep 5 hybrid 6 hybridtask task acc cws acc cws acc cws acc cws acc cws acc cws cd 0967 na 0827 0881 0547 0617 0713 0787 0700 0790 0827 0827 ie 0975 na 0508 0503 0542 0622 0533 0616 0542 0639 0542 0627 mt 0900 na 0500 0515 0500 0436 0592 0596 0525 0512 0533 0581 qa 0961 na 0531 0557 0461 0422 0515 0419 0569 0520 0577 0531 rc 0979 na 0507 0502 0557 0638 0457 0537 0507 0587 0557 0644 pp 0920 na 0480 0467 0540 0581 0520 0616 0560 0667 0580 0619 ir 0922 na 0511 0561 0489 0421 0567 0503 0622 0569 0611 0561 all 0951 na 0569 0624 0520 0548 0562 0608 0577 0632 0612 0646 is wrongly classified as false as no proof can be found in 
experiment 4 it is correctly classified astrue due to the small difference between domain and modelsizes for t and th there is hardly any overall difference in accuracybetween the shallow and the deep classifierhow ever it seems that the shallow classifier in its currentform has very little potential outside of the cd subset whereas the deep classifier shows a more promis ing performance for several subsetsexperiment 5 hybrid classification as shallow and deep classifiers seem to perform differently on differently designed datasets we hypothesized that a combination of these classifiers should bring furtherimprovementexperiment 5 therefore used all shal low and deep features togetherhowever the overallperformance of this classifier is not significantly better than either of the separate classifierscloser inspection of the results reveals that in comparison to the shallow classifier the hybrid classifier performs better or equally on all subsets but cdin comparison to the deep classifier in column 4 the hybrid classifier performs equallywell or better on all subsets apart from mt overall this means more robust performance of the hy brid classifier over differently designed datasets and therefore more independence from dataset designexperiment 6 dependency on dataset designas eperiment 5 shows simple combination of methods while maybe more robust will not necessar ily raise overall performance if the system does notknow when to apply which methodto test this hy pothesis further we integrated the subset indicator as a feature with the values cd ie mt rc ir pp qa into our hybrid systemindeed the resulting overall accuracy is significantly better thaneither shallow or deep system alonenote that us ing both a combination of methodologies and thesubset indicator is necessary to improve on individ ual shallow and deep classifiers for this corpuswe integrated the subset indicator also into the shallowand deep classifier by themselves yielding classi fiers shallowtask and deeptask with no or only very small changes in accuracy our shallow analysis is similar to the idf models proposed by we have expanded their approach by us ing other shallow features regarding text lengththe basic idea of our deep analysis using a de tailed semantic analysis and firstorder inferencegoes back to it is similar to some of the recent approaches that were pro posed in the context of the pascal rte workshop ie using the otter theorem prover using epilog or abduction none of these systems however incorporate model building as a central part of the inference mechanismwe have shown that solely relying on theorem proving is normally insufficient due to low recall and that using model builders is a promising way to approximate entailmentresults of other approaches to determining tex tual entailment indicate that it is an extremely hard 634 taskthe aforementioned rte workshop revealed that participating systems reached accuracy figuresranging between 050 and 059 and cws scores between 050 and 069 com paring this with our own results shows how well our systems performs onthe same data setthis is partly due to our hy brid approach which is more robust across different datasetsrelying on theorem proving as a technique for de termining textual entailment yielded high precision but low recall due to a general lack of appropriate background knowledgewe used model building as an innovative technique to surmount this problem toa certain extentstill it will be unavoidable to incor porate automatic methods for knowledge acquisition to 
increase the performance of our approachfuture work will be directed to the acquisition of targeted paraphrases that can be converted into background knowledge in the form of axiomsour hybrid approach combines shallow analysis with both theorem proving and model building and achieves high accuracy scores on the rte dataset compared to other systems that we are aware ofthe results for this approach also indicate that the choice of entailment recognition methods might have to vary according to the dataset design andor application and that a method that wants to achieve robust performance across different datasetsmight need the integration of several different entail ment recognition methods as well as an indicator of design methodology or applicationthus although test suites establish a controlledway of assessing textual entailment detection sys tems the importance of being able to predict textual entailment in nlp might be better justified usingtaskbased evaluationthis can be achieved by in corporating them in qa or summarisation systemsacknowledgements we would like to thank mirella lapata and malvina nissim as well as three anonymous review ers for their comments on this paperwe are also grateful to valentin jijkoun and bonnie webber for discussion and steve clark and james curran for help on using the ccgparser
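The confidence-weighted average score used in the evaluation above has a direct implementation: sort the judgements by decreasing confidence, then average the precision-at-rank over all ranks. A minimal sketch, assuming each judgement is given as a (confidence, is_correct) pair:

```python
# Confidence-weighted score (cws): after sorting judgements by decreasing
# confidence, average (#correct up to rank i) / i over all ranks i.

def cws(judgements):
    """judgements: iterable of (confidence, is_correct) pairs."""
    ranked = sorted(judgements, key=lambda cj: cj[0], reverse=True)
    correct_so_far = 0
    total = 0.0
    for rank, (_, is_correct) in enumerate(ranked, start=1):
        if is_correct:
            correct_so_far += 1
        total += correct_so_far / rank
    return total / len(ranked) if ranked else 0.0

def accuracy(judgements):
    return sum(1 for _, ok in judgements if ok) / len(judgements)

if __name__ == "__main__":
    # Toy example: correct answers tend to receive higher confidence,
    # so cws rewards the system beyond plain accuracy.
    preds = [(0.9, True), (0.8, True), (0.6, False), (0.4, True), (0.2, False)]
    print(round(accuracy(preds), 3), round(cws(preds), 3))
```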
H05-1079
Recognising textual entailment with logical inference. We use logical inference techniques for recognising textual entailment. As the performance of theorem proving turns out to be highly dependent on not readily available background knowledge, we incorporate model building, a technique borrowed from automated reasoning, and show that it is a useful, robust method to approximate entailment. Finally, we use machine learning to combine these deep semantic analysis techniques with simple shallow word overlap; the resulting hybrid model achieves high accuracy on the RTE test set, given the state of the art. Our results also show that the different techniques that we employ perform very differently on some of the subsets of the RTE corpus, and as a result it is useful to use the nature of the dataset as a feature. It is often the case that the lack of sufficient linguistic knowledge causes failure of inference; thus the system outputs no entailment for almost all pairs. Our system is based on logical representation and automatic theorem proving, but utilizes only WordNet as a lexical knowledge resource.
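A minimal version of the shallow overlap measure described in the text above (a hypothesis lemma is covered if it equals, or is a WordNet synonym of, some text lemma, weighted by idf and normalised by the total hypothesis weight) might look like the sketch below. It uses NLTK's WordNet interface, omits derivational relations and web-based idf estimation, and takes the idf weights as a given dictionary; all of these simplifications are assumptions of the sketch, not the original system.

```python
# Rough sketch of the wnoverlap measure. Assumes the NLTK WordNet corpus
# is installed (nltk.download("wordnet")). Derivational links and
# Google-based idf estimation from the paper are omitted here.

from nltk.corpus import wordnet as wn

def related(h_lemma, t_lemma):
    """Equal lemmas, or lemmas sharing at least one WordNet synset."""
    if h_lemma == t_lemma:
        return True
    return bool(set(wn.synsets(h_lemma)) & set(wn.synsets(t_lemma)))

def wnoverlap(text_lemmas, hyp_lemmas, idf):
    """Return overlap in [0, 1]; idf maps lemma -> weight (assumed given)."""
    total = sum(idf.get(h, 1.0) for h in hyp_lemmas)
    if total == 0:
        return 0.0
    covered = 0.0
    for h in hyp_lemmas:
        if any(related(h, t) for t in text_lemmas):
            covered += idf.get(h, 1.0)
    return covered / total

if __name__ == "__main__":
    t = ["crude", "oil", "price", "soar", "to", "record", "level"]
    h = ["crude", "oil", "price", "rise"]
    print(wnoverlap(t, h, idf={"crude": 2.0, "oil": 1.5,
                               "price": 1.2, "rise": 1.0}))
```

Normalising by the hypothesis weight mass keeps the score in [0, 1] and independent of hypothesis length, as the description above requires.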
a shortest path dependency kernel for relation extraction we present a novel approach to relation extraction based on the observation thatthe information required to assert a rela tionship between two named entities in the same sentence is typically capturedby the shortest path between the two entities in the dependency graph exper iments on extracting toplevel relationsfrom the ace newspaper corpus show that thenew shortest path dependency kernel outperforms a recent approach based on de pendency tree kernels one of the key tasks in natural language process ing is that of information extraction which istraditionally divided into three subproblems coref erence resolution named entity recognition and relation extractionconsequently ie corpora are typically annotated with information corresponding to these subtasks ace facilitating the development of sys tems that target only one or a subset of the threeproblemsin this paper we focus exclusively on extracting relations between predefined types of entities in the ace corpusreliably extracting relations between entities in naturallanguage docu ments is still a difficult unsolved problem whose inherent difficulty is compounded by the emergenceof new application domains with new types of nar rative that challenge systems developed for previouswellstudied domainsthe accuracy level of current syntactic and semantic parsers on natural lan guage text from different domains limit the extent to which syntactic and semantic information can be used in real ie systemsnevertheless various linesof work on relation extraction have shown experimentally that the use of automatically derived syntactic information can lead to significant improvements in extraction accuracythe amount of syntactic knowledge used in ie systems varies from part ofspeech only to chunking to shallow parse trees to dependency trees derived from full parse trees eventhough exhaustive experiments comparing the per formance of a relation extraction system based on these four levels of syntactic information are yet tobe conducted a reasonable assumption is that the extraction accuracy increases with the amount of syn tactic information usedthe performance howeverdepends not only on the amount of syntactic infor mation but also on the details of the exact modelsusing this informationtraining a machine learn ing system in a setting where the information usedfor representing the examples is only partially rele vant to the actual task often leads to overfittingit is therefore important to design the ie system so that the input data is stripped of unnecessary features as much as possiblein the case of the tree kernels from the authors reduce each relation example to the smallest subtree in the parse or dependency tree that includes both entitieswe will show in this paper that increased extraction performance can be 724 obtained by designing a kernel method that uses an even smaller part of the dependency structure theshortest path between the two entities in the undi rected version of the dependency graphlet e1 and e2 be two entities mentioned in the samesentence such that they are observed to be in a re lationship r ie are 1for example r can specify that entity e1 is located entity e2figure 1 shows two sample sentences from ace with entity mentions in boldcorrespondingly the first column in table 1 lists the four relations of typelocated that need to be extracted by the ie sys temwe assume that a relation is to be extractedonly between entities mentioned in the same sen tence and that the 
presence or absence of a relation is independent of the text preceding or following the sentencethis means that only information derived from the sentence including the two entities will be relevant for relation extractionfurthermore with each sentence we associate its dependency graphwith words figured as nodes and wordword dependencies figured as directed edges as shown in fig ure 1a subset of these wordword dependencies capture the predicateargument relations present inthe sentencearguments are connected to their target predicates either directly through an arc point ing to the predicate or indirectly through a preposition or infinitive particle other types of wordword dependen cies account for modifierhead relationships present in adjectivenoun compounds nounnoun compounds or adverbverb constructions in figure 1 we show the full dependency graphs for two sentences from the ace newspaper corpuswordword dependencies are typically catego rized in two classes as followslocal dependencies these correspond to local predicateargument constructions such as troops raided or pump ing stationsin figure 1nonlocal dependencies longdistance dependencies arise due to various linguistic con structions such as coordination extractionraising and controlin figure 1 among non local dependencies are troops warning or ministers preachinga context free grammar parser can be used to extract local dependencies which for each sentence form a dependency treemildly contextsensitive formalisms such as combinatory categorial grammar model word word dependencies more directly and can be used to extract both local and longdistance dependencies giving rise to a directed acyclic graph as illustrated in figure 1if e1 and e2 are two entities mentioned in the samesentence such that they are observed to be in a relationship r our hypothesis stipulates that the contribution of the sentence dependency graph to establishing the relationship r is almost exclu sively concentrated in the shortest path between e1 and e2 in the undirected version of the dependency graphif entities e1 and e2 are arguments of the same predicate then the shortest path between them willpass through the predicate which may be con nected directly to the two entities or indirectly through prepositionsif e1 and e2 belong to different predicateargument structures that share a common argument then the shortest path will pass through this argumentthis is the case with the shortest pathbetween stationsand workersin figure 1 passing through protesters which is an argument com mon to both predicates holdingand seizedin table 1 we show the paths corresponding to the four relation instances encoded in the ace corpus for thetwo sentences from figure 1all these paths sup port the located relationshipfor the first path it is reasonable to infer that if a person entity is doing some action to a facility entity then the personentity is located at that facility entitythe sec ond path captures the fact that the same person entity is doing two actions one action to a person entity and the other action to a facil ity entity a reasonable inference in this case is that the workersare located at the 725 s1 s2 protesters stations workers troops churches ministers seized several pumping holding 127 she will hostage recently have raided warning to stop preaching figure 1 sentences as dependency graphsrelation instance shortest path in undirected dependency graph s1 protesters at stations protesters seized stations s1 workers at stations workers holding protesters seized 
stations s2 troops at churches troops raided churches s2 ministers at churches ministers warning troops raided churches table 1 shortest path representation of relationsstationin figure 2 we show three more examples of the located relationship as dependency pathscreated from one or two predicateargument struc turesthe second example is an interesting caseas it illustrates how annotation decisions are accom modated in our approachusing a reasoning similarwith that from the previous paragraph it is reason able to infer that troopsare located in vans and that vansare located in cityhowever because vansis not an ace markable it cannot participate in an annotated relationshiptherefore troopsis annotated as being located in citywhich makes sense due to the transitivity of the relation locatedin our approach this leads to shortest paths that pass through two or more predicate argument structuresthe last relation example is a case where there ex ist multiple shortest paths in the dependency graph between the same two entities there are actually two different paths with each path replicated into three similar paths due to coordinationour current approach considers only one of the shortest paths nevertheless it seems reasonable to investigate usingall of them as multiple sources of evidence for rela tion extractionthere may be cases where e1 and e2 belongto predicateargument structures that have no argument in commonhowever because the dependency graph is always connected we are guaranteed to find a shortest path between the two enti tiesin general we shall find a shortest sequence of predicateargument structures with target predicates p1 p2 pn such that e1 is an argument of p1 e2 isan argument of pn and any two consecutive predi cates pi and pi1 share a common argument the shortest path between two entities in a depen dency graph offers a very condensed representationof the information needed to assess their relationshipa dependency path is represented as a sequence of words interspersed with arrows that in 726 he had no regrets for his actions in brckohisactionsinbrcko yous troops today acted for the first time to capture an alleged bosnian war criminal rushing from unmarked vans parked in the northern serbdominated city of bijeljinatroopsrushingfromvansparkedincity jelisic created an atmosphere of terror at the camp by killing abusing and threatening the detaineesdetaineeskillingjelisiccreatedatcamp detaineesabusingjelisiccreatedatcamp detaineesthreatningjelisiccreatedatcamp detaineeskillingbycreatedatcamp detaineesabusingbycreatedatcamp detaineesthreateningbycreatedatcamp figure 2 relation examplesdicate the orientation of each dependency as illustrated in table 1these paths however are completely lexicalized and consequently their performance will be limited by data sparsitywe can al leviate this by categorizing words into classes with varying degrees of generality and then allowing paths to use both words and their classesexamples of word classes are partofspeech tags and generalizations over pos tags such as noun active verb or passive verbthe entity type is also used forthe two ends of the dependency pathother poten tially useful classes might be created by associatingwith each noun or verb a set of hypernyms corre sponding to their synsets in wordnetthe set of features can then be defined as acartesian product over these word classes as illus trated in figure 3 for the dependency path betweenprotestersand stationin sentence s1in this rep resentation sparse or contiguous subsequences of 
nodes along the lexicalized dependency path are included as features simply byreplacing the rest of the nodes with their correspond ing generalizationsthe total number of features generated by this de pendency path is 41314 and some of them are listed in table 2protesters nns noun person seized vbd verb stations nns noun facility figure 3 feature generation from dependency pathprotesters seized stations noun verb noun person seized facility person verb facility table 2 sample featuresfor verbs and nouns occurring along a dependency path we also use an additional suffix to indicate a negative polarity itemin the case of verbs this suffix is usedwhen the verb is modi fied by a negative polarity adverb such as notor nevernouns get the negative suffix whenever they are modified by negative determiners such as no neitheror norfor example the phrase henever went to parisis associated with the depen dency path he went to parisexplicitly creating for each relation example avector with a position for each dependency path fea ture is infeasible due to the high dimensionality ofthe feature spacehere we can exploit dual learning algorithms that process examples only via computing their dotproducts such as the support vec tor machines these dotproducts be tween feature vectors can be efficiently computed through a kernel function without iterating over allthe corresponding featuresgiven the kernel func tion the svm learner tries to find a hyperplane that separates positive from negative examples and at thesame time maximizes the separation be tween themthis type of maxmargin separator hasbeen shown both theoretically and empirically to re sist overfitting and to provide good generalization performance on unseen examplescomputing the dotproduct between two relation examples amounts to calculating the 727 number of common features of the type illustrated in table 2if x x1x2xm and y y1y2yn are two relation examples where xi denotes the set ofword classes corresponding to position i then the number of common features between x and y is computed as in equation 1k 0 m 6 n n i1 c m n where c xiyi is the number of common word classes between xi and yithis is a simple kernel whose computation takes o timeif the two paths have different lengthsthey correspond to different ways of expressing a re lationship for instance they may pass through a different number of predicate argument structuresconsequently the kernel is defined to be 0 in this caseotherwise it is the product of the number of common word classes at each position in the two pathsas an example let us consider two instances of the located relationship 1his actions in brcko and 2his arrival in beijingtheir corresponding dependency paths are 1his actions in brcko and 2his arrival in beijingtheir representation as a sequence of sets of word classes is given by 1x x1 x2 x3 x4 x5 x6 x7 where x1 his prp person x2 x3 actions nns noun x4 x5 in in x6 x7 brcko nnp noun location 2y y1 y2 y3 y4 y5 y6 y7 where y1 his prp person y2 y3 arrival nn noun y4 y5 in in y6 y7 beijing nnp noun location based on the formula from equation 1 the kernel is computed as k 3111213 18we use this relation kernel in conjunction with svms in order to find decision hyperplanes that best separate positive examples from negative exampleswe modified the libsvm1 package for svm learn ing by plugging in the kernel described above and used its default oneagainstone implementation for multiclass classificationwe applied the shortest path dependency kernel to the problem of extracting toplevel 
relations from the ace corpus the version used for the september 2002 evaluationthe training part of this dataset consists of 422 documents witha separate set of 97 documents allocated for test ingthis version of the ace corpus contains three types of annotations coreference named entities and relationsentities can be of the type personorganization facility location and geo political entitythere are 5 general toplevelrelations role part located near and socialthe role relation links people to an organization to which they belong own founded or provide some servicethe part relation indicates subset relationships such as a state to a nation or a subsidiary to its parent companythe at relation indi cates the location of a person or organization at somelocationthe near relation indicates the proximity of one location to anotherthe social relation links two people in personal familial or profes sional relationshipseach toplevel relation type is further subdivided into more finegrained subtypesresulting in a total of 24 relation typesfor exam ple the located relation includes subtypes such as locatedat basedin and residencein total there are 7646 intrasentential relations of which 6156 are in the training data and 1490 in the test datawe assume that the entities and their labels areknownall preprocessing steps sentence segmentation tokenization and pos tagging were per formed using the opennlp2 package51 extracting dependencies using a ccgparser ccg is a typedriven theory of grammar where most languagespecific aspects ofthe grammar are specified into lexiconto each lex 1urlhttpwwwcsientuedutwcjlinlibsvm 2url httpopennlpsourceforgenet 728 ical item corresponds a set of syntactic categories specifying its valency and the directionality of itsargumentsfor example the words from the sen tence protesters seized several stationsare mapped in the lexicon to the following categories protesters np seized np several npnp stations np the transitive verb seizedexpects two arguments a noun phrase to the right and another noun phrase to the left similarly the adjective severalexpects a noun phrase to its rightdepending on whether its valency is greater than zero or not a syntactic category is called a functor or an argumentin the example above seizedandseveralare functors while protestersand sta tionsare argumentssyntactic categories are combined using a smallset of typed combinatory rules such as functional ap plication composition and type raisingin table 3we show a sample derivation based on three func tional applicationsprotesters seized several stations np np npnp np np np np np snp s table 3 sample derivationin order to obtain ccg derivations for all sen tences in the ace corpus we used the ccg parser introduced in 3this parser also outputs a list of dependen cies with each dependency represented as a 4tuple f a wf wa where f is the syntactic category of the functor a is the argument number wf is the head word of the functor and wa is the head word of theargumentfor example the three functional appli cations from table 3 result in the functorargument dependencies enumerated below in table 43urlhttpwwwircsupennedujuliahrparser f a wf wa npnp 1 severalstationsnp 2 seizedstationsnp 1 seizedprotesterstable 4 sample dependenciesbecause predicates and adjunctsare always represented as functors while complements are always represented as arguments it isstraightforward to transform a functorargument de pendency into a headmodifier dependencythe headmodifier dependencies corresponding to the three 
functorargument dependencies in table 4 areprotesters seized stations seized and sev eral stationsspecial syntactic categories are assigned in ccgto lexical items that project unbounded dependen cies such as the relative pronouns who whichand thatcoupled with a headpassing mechanism these categories allow the extraction of longrange dependenciestogether with the local wordworddependencies they create a directed acyclic depen dency graph for each parsed sentence as shown in figure 152 extracting dependencies using a cfgparser local dependencies can be extracted from a cfg parse tree using simple heuristic rules for findingthe head child for each type of constituentalter natively headmodifier dependencies can be directly output by a parser whose model is based on lexical dependenciesin our experiments we used the full parse output from collinsparser in which every nonterminal node is already annotated with head informationbecause local dependencies assemble into a tree for each sentence there is onlyone path between any two entities in a de pendency tree53 experimental resultsa recent approach to extracting relations is described in the au thors use a generalized version of the tree kernel from to compute a kernel over 729 relation examples where a relation example consists of the smallest dependency tree containing the two entities of the relationprecision and recall values are reported for the task of extracting the 5 toplevelrelations in the ace corpus under two different sce narios s1 this is the classic setting one multiclasssvm is learned to discriminate among the 5 top level classes plus one more class for the norelation casess2 because of the highly skewed data distribution the recall of the svm approach in the first sce nario is very lowin the authors propose doing relation extraction in twosteps first one binary svm is trained for relation detection which means that all positive rela tion instances are combined into one classthen the thresholded output of this binary classifier is used as training data for a second multiclass svm which is trained for relation classificationthe same kernel is used in both stageswe present in table 5 the performance of our shortest path dependency kernel on the task ofrelation extraction from ace where the dependencies are extracted using either a ccg parser or a cfg parser we also show the results presented in for their best performing kernel k4 under both scenariosmethod precision recall fmeasure spccg 675 372 480 spcfg 711 392 505 k4 703 263 380 spccg 637 414 502 spcfg 655 438 525 k4 671 350 458 table 5 extraction performance on acethe shortestpath dependency kernels outperform the dependency kernel from in both scenarios with a more significant dif ference for spcfgan error analysis revealed thatcollinsparser was better at capturing local depen dencies hence the increased accuracy of spcfganother advantage of our shortestpath dependency kernels is that their training and testing are very fast this is due to representing the sentence as a chainof dependencies on which a fast kernel can be com putedall the four sp kernels from table 5 take between 2 and 3 hours to train and test on a 26ghz pentium iv machineto avoid numerical problems we constrained the dependency paths to pass through at most 10 words by setting the kernel to 0 for longer pathswe also tried the alterna tive solution of normalizing the kernel however this led to a slight decrease in accuracyhaving longer paths give larger kernel scores in the unnormalizedversion does not pose a 
problem, because by definition paths of different lengths correspond to disjoint sets of features. consequently, the svm algorithm will induce lower weights for features occurring in longer paths, resulting in a linear separator that works irrespective of the size of the dependency paths. in prior work the authors do relation extraction using a tree kernel defined over shallow parse tree representations of sentences; the same tree kernel is slightly generalized in later work and used in conjunction with dependency trees. in both approaches, a relation instance is defined to be the smallest subtree in the parse or dependency tree that includes both entities. in this paper we argued that the information relevant to relation extraction is almost entirely concentrated in the shortest path in the dependency tree, leading to an even smaller representation. another difference between the tree kernels above and our new kernel is that the tree kernels used for relation extraction are opaque, i.e. the semantics of the dimensions in the corresponding hilbert space is not obvious. for the shortest-path kernels the semantics is known by definition: each path feature corresponds to a dimension in the hilbert space. this transparency allows us to easily restrict the types of patterns counted by the kernel to types that we deem relevant for relation extraction. the tree kernels are also more time consuming, especially in the sparse setting, where they count sparse subsequences of children common to nodes in the two trees. the tree kernel is computed in o( ) time, where m and n are the number of nodes in the two trees; this changes to o( ) in the sparse setting. our shortest-path intuition bears some similarity with the underlying assumption of the relational pathfinding algorithm: in most relational domains, important concepts will be represented by a small number of fixed paths among the constants defining a positive instance; for example, the grandparent relation is defined by a single fixed path consisting of two parent relations. we can see this happening also in the task of relation extraction from ace, where the important "concepts" are the 5 types of relations and the "constants" defining a positive instance are the 5 types of entities. local and non-local dependencies are equally important for finding relations. in this paper we tried extracting both types of dependencies using a ccg parser; however, another approach is to recover deep dependencies from syntactic parses. this may have the advantage of preserving the quality of local dependencies while completing the representation with non-local dependencies. currently the method assumes that the named entities are known. a natural extension is to automatically extract both the entities and their relationships. recent research indicates that integrating entity recognition with relation extraction in a global model that captures the mutual influences between the two tasks can lead to significant improvements in accuracy. we have presented a new kernel for relation extraction based on the shortest path between the two relation entities in the dependency graph. comparative experiments on extracting top-level relations from the ace corpus show significant improvements over a recent dependency tree kernel. this work was supported by grants iis-0117308 and iis-0325116 from the nsf.
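the shortest-path representation and the "paths of different lengths share no features" argument can be made concrete with a small sketch. this is not the authors' code: the per-position feature set used here (just the word plus a part-of-speech tag) and the product-of-common-features form of the kernel are illustrative assumptions layered on the description above; only the shortest-path extraction over head-modifier dependencies and the zero kernel for paths of different lengths come directly from the text.

from collections import deque

def shortest_dependency_path(dependencies, source, target):
    """dependencies: iterable of (head, modifier) word pairs for one sentence,
    e.g. [("seized", "protesters"), ("seized", "stations"), ("stations", "several")].
    returns the list of words on the shortest (undirected) path from source to
    target, or None if the two words are not connected."""
    graph = {}
    for head, mod in dependencies:
        graph.setdefault(head, set()).add(mod)
        graph.setdefault(mod, set()).add(head)
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for neighbour in graph.get(path[-1], ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append(path + [neighbour])
    return None

def path_kernel(path_x, path_y, features):
    """an assumed product-of-common-features kernel over two shortest paths.
    features(word) returns a set of per-position features (the word itself, its
    part of speech, its entity type, ...).  paths of different lengths share no
    features, so the kernel is zero for them, as argued in the text above."""
    if path_x is None or path_y is None or len(path_x) != len(path_y):
        return 0
    k = 1
    for wx, wy in zip(path_x, path_y):
        k *= len(features(wx) & features(wy))
    return k

# toy usage with the sample sentence from the text ("protesters seized several stations")
deps = [("seized", "protesters"), ("seized", "stations"), ("stations", "several")]
path = shortest_dependency_path(deps, "protesters", "stations")
print(path)  # ['protesters', 'seized', 'stations']
pos = {"protesters": "NNS", "seized": "VBD", "stations": "NNS", "several": "JJ"}
feats = lambda w: {w, pos.get(w, "UNK")}
print(path_kernel(path, path, feats))  # 8: two shared features at each of three positions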
H05-1091
a shortest path dependency kernel for relation extraction. we present a novel approach to relation extraction, based on the observation that the information required to assert a relationship between two named entities in the same sentence is typically captured by the shortest path between the two entities in the dependency graph. experiments on extracting top-level relations from the ace newspaper corpus show that the new shortest path dependency kernel outperforms a recent approach based on dependency tree kernels. this work on relation extraction shows that the shortest dependency path between any two entities captures the information required to assert a relationship between them.
syntax annotation for the genia corpus linguistically annotated corpus based on texts in biomedical domain has been constructed to tune natural language processing tools for bio textmining as the focus of information extraction is shifting from nominal information such as named entity to verbal information such as function and interaction of substances applica tion of parsers has become one of the key technologies and thus the corpus annotated for syntactic structure of sen tences is in demand a subset of the genia corpus consisting of 500 medline abstracts has been annotated for syntactic structure in an xml based format based on penn treebank ii scheme interannotator agreement test indicated that the writ ing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation and that annotation can be stably done by linguists without much knowledge of bi ology with appropriate guidelines regarding to linguistic phenomena par ticular to scientific texts research and development for information extraction from biomedical literature has been rapidly advancing due to demands caused by information overload in the genomerelated fieldnatural language process ing techniques have been regarded as useful for this purposenow that focus of in formation extraction is shifting from extraction of nominalinformation such as named entity to verbalinformation such as relations of enti ties including events and functions syntactic analysis is an important issue of nlp application in biomedical domainin extraction of rela tion the roles of entities participating in the relation must be identified along with the verb that represents the relation itselfin text analysis this corresponds to identifying the subjects ob jects and other arguments of the verbthough rulebased relation information ex traction systems using surface pattern matching andor shallow parsing can achieve high precision in a particular target domain they tend to suffer from low recall due to the wide variation of the surface ex pression that describe a relation between a verb and its argumentsin addition the portability of such systems is low because the system has to be reequipped with different set of rules when different kind of relation is to be extractedone solution to this problem is using deep parsers which can abstract the syntactic variation of a relation between a verb and its arguments repre sented in the text and constructing extraction rule on the abstract predicateargument structureto do so widecoverage and highprecision parsers are requiredwhile basic nlp techniques are relatively general and portable from domain to domain customization and tuning are inevitable especially in order to apply the techniques effec tively to highly specialized literatures such as research papers and abstractsas recent advances in nlp technology depend on machine learning techniques annotated corpora from which system can acquire rules are indispensable 220 resources for customizing generalpurpose nlp toolsin biotextmining for example training on partofspeech annotated genia cor pus was reported to improve the accuracy of junk tagger from 835 to 981 on medline abstracts and the framed corpus was used to train tnt tagger on german to improve its accuracy from 957 to 98 on clinical reports and other biomedical textscorpus annotated for syntactic structures is expected to play a similar role in tuning parsers to biomedical domain ie similar improve ment on the performance of parsers is expected by using 
domainspecific treebank as a resource for learningfor this purpose we construct gena treebank a treebank on research abstracts in biomedical domainthe base text of gtb is that of the genia cor pus constructed at university of tokyo which is a collection of research ab stracts selected from the search results of medline database with keywords human blood cells and transcription factorsin the genia corpus the abstracts are en coded in an xml scheme where each abstract is numbered with medline uid and contains title and abstractthe text of title and abstract is segmented into sentences in which biological terms are annotated with their semantic classesthe genia corpus is also annotated for partof speech and coreference is also annotated in a part of the genia corpus by medco project at institute for infocomm research singapore gtb is the addition of syntactic information to the genia corpusby annotating various linguistic information on a same set of text the genia corpus will be a resource not only for individual purpose such as named entity extrac tion or training parsers but also for integrated systems such as information extraction using deep linguistic analysissimilar attempt of con structing integrated corpora is being done in university of pennsylvania where a corpus of medline abstracts in cyp450 and oncology domains where annotated for named entities pos and tree structure of sentences 21 annotation schemethe annotation scheme basically follows the penn treebank ii scheme encoded in xmla nonnull constituent is marked as an element with its syntactic cate gory used as tagsa null constitu ent is marked as a childless element whose tag corresponds to its categoriesother function tags are encoded as attributesfigure 1 shows an ex ample of annotated sentence in xml and the corresponding ptb notationthe label s means sentence npnoun phrase ppprepositional phrase and vpverb phrasethe label npsbjmeans that the element is an np that serves as the subject of the sentencea null element the trace of the object of stud iedmoved by passivization is denoted by in xml and 55in ptb notationthe number 55which refers to the identifier of the moved ele ment is denoted by id and refattributes in xml and is denoted as a part of a label in ptbin addition to changing the encoding we made some modifications to the schemefirst analysis within the noun phrase is simplifiedsecond semantic division of adverbial phrases such as tmp and mnr are not used adverbial constituents other than advp or ppused ad verbially are marked with adv tags but not with semantic tagsthird a coordination struc ture is explicitly marked with the attribute syncoodwhereas in the original ptb scheme it is not marked as suchin our gtb scheme nx and nac of the ptb scheme are not useda noun phrase is gen erally left unstructuredthis is mainly in order to simplify the process of annotationin case of biomedical abstracts long noun phrases often involve multiword technical terms whose syn tactic structure is difficult to determine without deep domain knowledgehowever the structure of noun phrases are usually independent of the structure outside the phrase so that it would be 221 easier to analyze the phrases involving such terms independently and later merge the two analysis togetherthus we have decided that we leave noun phrases unstructured in gtb annotation unless their analy sis is necessary for determining the structure outside the phraseone of the exception is the cases that involves coordination where it is nec essary to explicitly mark 
up the coordinated constituentsin addition we have added special attributes txterr unsure and commentfor later inspectionthe txterris used when the annotator suspects that there is a grammatical error in the original text the unsureattribute is used when the annotator is not confident and the commentis used for free comments by the annotator22 annotation processthe sentences in the titles and abstracts of the base text of genia corpus are annotated manu ally using an xml editor used for the global document annotation project although the sentence boundaries were adopted from the corpus the tree structure annotation was done independently of pos and term an notation already done on the genia corpusthe annotator was a japanese nonbiologist who has previously involved in the pos annotation of the genia corpus and accustomed to the style of research abstracts in englishmanually annotated abstracts are automatically converted to the ptb format merged with the pos annota tion of the genia corpus so far 500 abstracts are annotated and converted to the merged ptb formatin the merg ing process we found several annotation errorsthe 500 abstracts with correction of these errors are made publicly available as the genia treebank beta versionfor further cleanup we also tried to parse the corpus by the enju parser and identify the error of the corpus by investigating into the parse errorsenju is an hpsg parser that can be trained with ptbtype corpora which is reported to have 87 accuracy on wall street journal portion of penn treebank corpuscurrently the accuracy of the parser drops down to 82 on gtbbeta and although proper quantitative analysis is yet to be done it was found that the mismatches between labels of the treebank and the genia pos corpus are a major source of parse errorthe cor rection is complicated because several errors in the genia pos corpus were found in this cleaningup processwhen the cleaningup process is done we will make the corpus pub licly available as the proper releasein the present paper the binding of a 125ilabeled aldosterone derivative to plasma membrane rich fractions of hml was studied we have also checked interannotator agreementalthough the ptb scheme is popular among natural language processing society applicabil ity of the scheme to highly specialized text such as research abstract is yet to be discussedespe cially when the annotation is done by linguists lack of domain knowledge might decrease the stability and accuracy of annotationa small part of the base text set was annotated by another annotatorthe 10 abstracts were chosen randomly had 6 to 17 sentences per abstract the new annotator had a similar background as the first annotator that she is a japanese non biologist who has experiences in translation of figure 1the sentence in the present paper the binding of a 125ilabeled aldosterone derivative to plasma mem brane rich fractions of hml was studiedannotated in xml and ptb formats222 technical documents in english and in corpus annotation of english textsthe two results were examined manually and there were 131 disagreementsalmost every sentence had at least one disagreementwe have made the gold standardfrom the two sets of abstracts by resolving the disagreements and the accuracy of the annotators against this gold standard were 967 for the first annotator and 974 for the second annotatorof the disagreement the most prominent were the cases involving coordination espe cially the ones with ellipsisfor example one annotator annotated the phrase il1 and il18 
mediated functionas in figure 2a the other annotated as figure 2bsuch problem is addressed in the ptb guideline and both formats are allowed as alter nativesas coordination with ellipsis occurs rather frequently in research abstracts this kind of phenomena has higher effect on decrease of the agreement rate than in penn treebankof the 131 disagreements 25 were on this type of coordinationanother source of disagreement is the at tachment of modifiers such as prepositional phrases and pronominal adjectiveshowever most are benign ambiguitywhere the difference of the structure does not affect on interpre tation such as high expression of stat in monocyteswhere the prepositional phrase in monocytescan attach to expressionor statwithout much difference in meaning and is augmented when the sensitizing tumor is a genetically modified variantwhere the whclause can attach to is augmentedor aug mentedwithout changing the meaningthe ptb guideline states that the modifier should be attached at the higher level in the former case and at the lower case in the latterin the annota tion results one annotator consistently attached the modifiers in both cases at the higher level and the other consistently at the lower level in dicating that the problem is in understanding the scheme rather than understanding the sentenceonly 15 cases were true ambiguities that needed knowledge of biology to solve in which 5 in volved coordination although the number was small there were disagreements on how to annotate a mathematical formula such as n2embedded in the sen tence since mathematical formulae were outside the scope of the original ptb schemeone annotator annotated this kind of phrase consis tently as a phrase with as an adjective the other annotated as phrase with as a verbthere were 6 such casesanother disagreement particular to abstracts is a treatment of labeled sentencesthere were 8 sentences in two ab stracts where there is a label like backgroundone annotator included the colon in the la bel while the other did notyet another is that one regarded the phrase author et al as coor dination and the other regarded et al as a modifier il1 and il18mediated function other disagreements are more general type such as regarding edform of a verb as an ad jective or a participle miscellaneous errors such as omission of a subtype of label or the position of tags il1 and il18mediated function np adjp function adjp and adjp il1 il18 mediated figure 2aannotation of a coordinated phrase by the first annotatora denotes a null constituent np np and np adjp 20 il18 meidiated np il1 function20 figure 2bannotation of the same phrase as in figure 2a by the second annotatora denotes a null constituent and 20denotes coindexing223 with regards to for the inserted phrase or the errors which look like just carelesssuch dis agreements and mistakes are at least partially eliminated when reliable taggers and parsers are available for preprocessingthe result of the interannotator agreement test indicates that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotationcon trary to the expectation that the lack of domain knowledge causes a problem in annotation on attachments of modifiers the number of cases where annotation of modifier attachment needs domain knowledge is smallthis indicates that linguists can annotate most of syntactic structure without an expert level of domain knowledgea major source of difficulty is coordination especially the ones involving ellipsiscoordination 
is reported to be difficult phenomena in an notation of different levels in the genia corpus in addition to the fact that this is the major source of interannotator agreement the annotator often commented the coordinated structure as unsurethe problem of coordination can be divided into two with different nature one is that the annota tion policy is still not wellestablished for the coordination involving ellipsis and the other is an ambiguity when the coordinated phrase has modifierssyntax annotation of coordination with ellipsis is difficult in general but the more so in an notation of abstracts than in the case of general texts because in abstracts authors tend to pack information in limited number of wordsthe ptb guideline dedicates a long section for this phenomena and allows alternatives in annotation but there are still cases which are not well covered by the schemefor example in addition to the disagreement the phrase illustrated in figure 2a and figure 2b shows another problem of the annotation schemeboth annotators fail to indicate that it is mediatedthat was to be after il1because there is no mechanism of coindexing a null element with a part of a tokenthis problem of ellipsis can frequently occur in research abstracts and it can be argued that the tokenization criteria must be changed for texts in biomedical domain so that such fragment as il18and mediatedin il18ediatedshould be regarede as separate tokensthe pennsylvania biology corpus partially solves this problem by separating a token where two or more subtokens are connected with hyphens but in the cases where a shared part of the word is not separated by a hyphen the word including the part is left uncutthe current gtb follows the genia corpus that it retains the tokeniza tion criteria of the original penn treebank but this must be reconsidered in futurefor analysis of coordination with ellipsis if the information on full forms is available one strategy would be to leave the inside structure of coordination unannotated in the treebank corpus and later merge it with the coordination structure annotationthe genia term corpus annotates the full form of a techni cal term whose part is omitted in the surface as an attribute of the element indicating a technical term in the above mentioned pennsylvania corpus a similar mechanism is used for recovering the full form of named entitieshowever in both corpora no such information is available outside the termsentitiesthe cases where scope of modification in coordinated phrases is problematic are few but they are more difficult in abstracts than in gen eral texts because the resolution of ambiguity needs domain knowledgeif termentity annota tion is already done that information can help resolve this type of ambiguity but again the problem is that outside the termsentities such information is not availableit would be practi cal to have the structure flat but specially marked when the tree annotators are unsure and have a domain expert resolve the ambiguity as the sentences that needs such intervention seems fewsome cases of ambiguity in modifier at tachment can be solved with similar processwe believe that other type of disagreements can be solved with supplementing criteria for linguistic phenomena not wellcovered by the scheme and annotator trainingautomatic pre processing by pos taggers and parsers can also help increase the consistent annotation224a subset of the genia corpus is annotated for syntactic structureinterannotator agreement test indicated that the annotation can be done 
stably by linguists without much knowledge of biology, provided that proper guidelines are established for linguistic phenomena particular to scientific research abstracts. we have made the 500-abstract corpus in both xml and ptb formats and made it publicly available as the genia treebank beta version. we are further cleaning up the 500-abstract set, and at the same time initial annotation of the remaining abstracts is being done, so that the full genia set of 2000 abstracts will be annotated with tree structure. for parsers to be useful for information extraction, they have to establish a map between syntactic structure and the more semantic predicate-argument structure, and between the linguistic predicate-argument structures and the factual relations to be extracted. annotation of various information on the same set of text can help establish these maps. for the factual relations, we are annotating relations between proteins and genes in cooperation with a group of biologists. for predicate-argument annotation, we are investigating the use of the parse results of the enju parser. acknowledgments: the authors are grateful to the annotators and colleagues who helped with the construction of the corpus. this work is partially supported by a grant-in-aid for scientific research on priority area c "genome information science" from the ministry of education, culture, sports, science and technology of japan.
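the xml-to-ptb conversion described in section 2.1 (category as element tag, function tags as attributes, childless elements for null constituents, id/ref attributes for coindexing rendered as *-n traces) can be sketched as a small recursive walk. this is a minimal illustration, not the gtb conversion script: the attribute names "func", "id" and "ref" and the fallback trace symbol are assumptions; only the overall shape of the encoding comes from the text above.

import xml.etree.ElementTree as ET

def to_ptb(elem):
    # the element tag is the syntactic category; function tags and coindexing
    # identifiers are folded into the ptb label, as in np-sbj-55
    label = elem.tag.upper()
    if elem.get("func"):
        label += "-" + elem.get("func").upper()
    if elem.get("id"):
        label += "-" + elem.get("id")
    text = (elem.text or "").strip()
    children = list(elem)
    if not children and not text:
        # null constituent: render as a trace, coindexed through "ref" if present
        # ("*T*" is an assumed fallback when no ref attribute is given)
        trace = "*-" + elem.get("ref") if elem.get("ref") else "*T*"
        return "(%s %s)" % (label, trace)
    parts = [to_ptb(child) for child in children] if children else [text]
    return "(%s %s)" % (label, " ".join(parts))

# toy sentence fragment in the assumed encoding
xml = """<s>
  <np func="sbj" id="55"><n>binding</n></np>
  <vp><v>was</v><vp><v>studied</v><np ref="55"/></vp></vp>
</s>"""
print(to_ptb(ET.fromstring(xml)))
# (S (NP-SBJ-55 (N binding)) (VP (V was) (VP (V studied) (NP *-55))))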
I05-2038
syntax annotation for the genia corpus. a linguistically annotated corpus based on texts in the biomedical domain has been constructed to tune natural language processing tools for bio-textmining. as the focus of information extraction shifts from nominal information, such as named entities, to verbal information, such as the function and interaction of substances, the application of parsers has become one of the key technologies, and thus a corpus annotated for the syntactic structure of sentences is in demand. a subset of the genia corpus consisting of 500 medline abstracts has been annotated for syntactic structure in an xml-based format based on the penn treebank ii scheme. an inter-annotator agreement test indicated that the writing style rather than the contents of the research abstracts is the source of the difficulty in tree annotation, and that annotation can be done stably by linguists without much knowledge of biology, given appropriate guidelines regarding linguistic phenomena particular to scientific texts. our genia treebank corpus is estimated to have no imperative sentences and only seven interrogative sentences.
the second international chinese word segmentation bakeoff the second international chinese word segmentation bakeoff was held in the summer of 2005 to evaluate the current state of the art in word segmentationtwenty three groups submitted 130 result sets over two tracks and four different corpora we found that the technol ogy has improved over the intervening two years though the outofvocabularyproblem is still or paramount impor tance chinese is written without interword spaces so finding wordboundaries is an essential first stepin many natural language processing applications including mono and crosslingual infor mation retrieval and texttospeech systemsthis word segmentation problem has been active areaof research in computational linguistics for almost two decades and is a topic of active re search around the worldas the very notion of wordhoodin chinese is hotly debated so thedetermination of the correct division of a chi nese sentence into wordscan be very complexin 2003 sighan the special interest group for chinese language processing of the association for computational linguistics conducted the first international chineseword segmentation bakeoff that competition was the first con ducted outside of china and has become the benchmark with which researchers evaluate their segmentation systemsduring the winter of 2004 it was decided to hold a second evaluation to determine how the latest research has affected segmentation technology2details of the contest 21the corpora four corpora were used in the evaluation two each using simplified and traditional chinese characters1 the simplified chinese corporawere provided by beijing university and micro soft research beijingthe traditional chinese corpora were provided by academia sinica in taiwan and the city university of hong kongeach provider supplied separate training andtruth data setsdetails on each corpus are pro vided in table1with one exception all of the corpora wereprovided in a single character encodingwe de cided to provide all of the data in both unicode and the standard encoding used in each localethis would allow systemsthat use one or the other encoding to chose appropriately while ensuring consistent transcoding across all sitesthis conversion was prob lematic in two cases 1the academia sinica corpus providedin unicode contained char acters found in big five plus that are not found in microsofts cp950 or standard big fiveit also contained compatibility characters that led to transcoding errors when converting from unicode to big five plusa detailed description of these issues can be found on the bakeoff 2005 1 a fifth corpus was provided by the university of pennsylvania but for numerous technical reasons it was not used in the evaluationhowever it has been made available on the sighan website along with the other corpora123 pages on the sighan websitethe data also included 11 instances of an invalid character that could not be converted to big five pluswas initially supplied in big five hkscswe initially converted this tounicode but found that there were char acters appearing in unicode ideograph extension b which many systems are unable to handlecity university was gracious enough to provide unicode versions for their files with all characters in the unicode bmpspecific details can be found on the bakeoff 2005 pages of the sighan websitethe truth data was provided in segmented and unsegmented form by all of the providers except academia sinica who only provided the segmented truth filesthese were converted to unsegmented 
form using a simple perl scriptunfortunately this script also removed spaces separating nonchinese tokenswe had no expectation of correct segmentationon nonchinese text so the spaces were manu ally removed between nonchinese text in the truth data prior to scoringthe academia sinica data separated tokensin both the training and truth data using a full width space instead of one or more halfwidth spacesthe scoring script was modified to ignore the type of space used so that teams would not be penalized during scoring for using a different separatorthe segmentation standard used by each provider were made available to the participantsthough late in the training periodthese stan dards are either extremely terse verbose but in chinese only or are verbose and moderately bilingualthe pku corpus uses a standard derived from gb 13715 the chinese government standard for text segmentation incomputer applicationssimilarly as uses a tai wanese national standard for segmentation incomputer applicationsthe cityu data was seg mented using the livac corpus standard and the msr data to microsoft internal standardthe standards are available on the bakeoff web sitethe pku data was edited by the organizers to remove a numeric identifier from the start of each lineunless otherwise noted in this paper no changes beyond transcoding were made to the data furnished by contributors22rules and procedures the bakeoff was run almost identically to the first described in sproat and emerson the detailed instructions provided to the partici pants are available on the bakeoff website at httpwwwsighanorgbakeoff2005 groups interested in participating in the competition registered on the sighan websiteonly the pri mary researcher for each group was asked to registerregistration was opened on june 1 corpus abbrevencodings training size test size academia sinica as big five plus unicode 545m 141k 122k 19k beijing university pk cp936 unicode 11m 55k 104k 13k city university of hong kong cityu big fivehkscs unicode 146m 69k 41k 9k microsoft research msr cp936 unicode 237m 88k 107k 13k table 1corpus information 124 2005 and allowed to continue through the time the training data was made available on july 11when a site registered they selected which cor pus or corpora there were interested in using and whether they would take part in the open or closed tracks on july 11 the training data was made available on the bakeoff website for downloading the same data was used regardless of the tracks the sites registered forthe web site did not allow a participant to id site contact country as pku cityu msr 2 icl beijing university wuguang shi zh 4 itnlp lab harbin institute oftechnology wei jiang zh 5 france telecom rd beijing heng li zh 6 information retrieval lab harbininstitute of technology huipeng zhang zh 7 dept of linguistics the universityof hong kong guohong fu hk 8 computer science dept xiamenuniversity hualin zeng zh 9 dept of linguistics the ohio stateuniversity xiaofei lu us 12 dept of computer science theuniversity of sheffield yaoyong li gb 13 nanjing university jiajun chen zh 14 stanford nl group huihsin tseng us 15 nara institute of science and technology masayuki asahara jp 16 academia sinica yufang tsai tw 19 national university of singapore hwee tou ng sg 21 kookmin university seungshik kang ko 23 us dept of defense thomas keenan us 24 dept of information managementtung nan institute of technology jialin tsai tw 26 icl peking university huiming duan zh 27 yahooinc aitao chen us 29 the chinese university of hongkong tak 
pang lau hk 31 city university of hong kong ka po chow hk 33 city university of hong kong chun yu kit hk 34 institute of computing technologychinese academy of sciences shuanglong li zh table 2participating groups 125 add a corpus to the set they initially selected though at least one asked us via email to add one and this was done manuallygroups were given until july 27 to train their systems when the testing data was released on the web sitethey then had two days to process the test corpora and return them to the organizer via email on jul 29 for scoringeach participants results were posted to their section of the web site onaugust 6 and the summary results for all par ticipants were made available to all groups on august 12two tracks were available for each corpus open and closed in the open tests participants could use any external data in addition to the training corpus to train their systemthis included but was not limited to external lexica character set knowledge partofspeech information etc sites participating in an open test were required to describe this external data in their system descriptionin closed tests participants were only allowed to use information found in the training dataabsolutely no other data or information could be used beyond that in the training documentthis included knowledge of character sets punctuation characters etc these seemingly artificial restrictions were formulated to studyexactly how far one can get without sup plemental informationother obvious restrictions applied groups could not participate using corpora that they or their organization provided or that they had used before or otherwise seensites were allowed submit multiple runs within a track allowing them to compare various approachesscoring was done automatically using acombination of perl and she will scriptspartici pants were asked to submit their data using very strict naming conventions to facilitate this inonly a couple of instances were these not fol lowed and human intervention was requiredafter the scoring was done the script would mail the detailed results to the participantthe scripts used for scoring can be downloaded from the corpus word count r p f oov roov riv as 122610 0909 0857 0882 0043 0004 0950 cityu 40936 0882 0790 0833 0074 0000 0952 msr 106873 0955 0912 0933 0026 0000 0981 pku 104372 0904 0836 0869 0058 0059 0956 table 3 baseline scores generated via maximal matching using only words from the training data corpus word count r p f oov roov riv as 122610 0979 0985 0982 0043 0996 0978 cityu 40936 0988 0991 0989 0074 0997 0988 msr 106873 0991 0992 0991 0026 0998 0990 pku 104372 0985 0988 0987 0058 0994 0985 table 4 topline scores generated via maximal matching using only words from the testing data 126 bakeoff 2005 web siteit was provided to the participants to aid in the their data analysisas noted above some of the trainingtruth data used a fullwidth space to separate tokens the scoring script was modified to ignore the differences between fullwidth and halfwidth spacesthis is the only case where the halfwidthfullwidth distinction was ignored a system that convertedtokens from fullwidth to halfwidth was penal ized by the script23participating sitesthirtysix sites representing 10 countries ini tially signed up for the bakeoffthe peoples republic of china had the greatest number with 17 followed by the united states hong kong taiwan six others with one eachof these 23 submitted results for scoring andsubsequently submitted a paper for these pro ceedingsa summary of 
participating groups and the tracks for which they submitted results can be found in table2 on the preceding pageall together 130 runs were submitted for scoring3results in order to provide hypothetical best and worst case results we used a simple leftto right maximal matching algorithm implemented in perl to generate toplineand baselineparticipant run id word count r cr p cp f oov roov riv 15 b 122610 0952 000122 0951 000123 0952 0043 0696 0963 15 a 122610 0955 000118 0939 000137 0947 0043 0606 0971 14 122610 095 000124 0943 000132 0947 0043 0718 0960 27 122610 0955 000118 0934 000142 0945 0043 0468 0978 12 122610 0946 000129 0942 000134 0944 0043 0648 0959 7 122610 0947 000128 0934 000142 094 0043 0523 0966 15 c 122610 0944 000131 0934 000142 0939 0043 0445 0967 33 122610 0944 000131 0902 000170 0923 0043 0234 0976 5 122610 0948 000127 0900 000171 0923 0043 0158 0983 4 122610 0943 000132 0895 000175 0918 0043 0137 0979 table 5academia sinica closed participant run id word count r cr p cp f oov roov riv 19 122610 0962 000109 095 000124 0956 0043 0684 0975 27 122610 0958 000115 0938 000138 0948 0043 0506 0978 12 122610 0949 000126 0947 000128 0948 0043 0686 0961 7 122610 0955 000118 0938 000138 0946 0043 0579 0972 31 122610 0943 000132 0931 000145 0937 0043 0531 0962 4 122610 0952 000122 092 000155 0936 0043 0354 0979 5 122610 0952 000122 0919 000156 0935 0043 0311 0981 table 6academia sinica open 127 numbersthis was done by generating word listsbased only on the vocabulary in each truth and training corpus and segmenting the respective test corporathese results are presented in tables3 and 4all of the results comprise the following data test recall test precision balancedf score the outof vocabulary rate on the test corpus the recall on oov words and the recall on invocabulary words we use the usual definition of outofvocabulary words as the set of words occurring in the test corpus that are not in the training corpusas in the previous evaluation to test the confidence level that two trials are significantly different from each other we used the central limit theorem for bernoulli trials assuming that the recall rates from the various trials represents the probability that a word will be successfully identified and that a binomial distribution is appropriate for the experimentwe calculated these values at the 95 confidence interval with the formula 2 participant run id word count r cr p cp f oov roov riv 19 40936 0967 000177 0956 000203 0962 0074 0806 098 16 40936 0958 000198 095 000215 0954 0074 0775 0973 27 40936 0952 000211 0937 000240 0945 0074 0608 098 7 40936 0944 000227 0938 000238 0941 0074 0667 0966 12 40936 0933 000247 094 000235 0936 0074 0653 0955 4 40936 0946 000223 0898 000299 0922 0074 0417 0989 5 40936 094 000235 0901 000295 092 0074 041 0982 table 8 city university of hong kong open 128 n where n is the number of wordsthis value appears in subsequent tables under the column crwe also calculate the confidence that the a character string segmented as a word is actually a word by treating p as the precision rates of each systemthis is referred to as cp inthe result tablestwo systems are then considered to be statistically different if one of their cr or cp are differenttables 512 contain the results for each corpus and track ordered by f scoreparticipant run id word count r cr p cp f oov roov riv 14 106873 0962 000117 0966 000111 0964 0026 0717 0968 7 106873 0962 000117 0962 000117 0962 0026 0592 0972 27 a 106873 0969 000106 0952 000131 0960 0026 0379 0985 27 b 106873 0968 
000108 0953 000129 0960 0026 0381 0984 4 106873 0973 000099 0945 000139 0959 0026 0323 0991 15 b 106873 0952 000131 0964 000114 0958 0026 0718 0958 5 106873 0974 000097 0940 000145 0957 0026 021 0995 13 106873 0959 000121 0956 000125 0957 0026 0496 0972 12 106873 0952 000131 0960 000120 0956 0026 0673 096 24 6 106873 0958 000123 0952 000131 0955 0026 0503 097 24 7 106873 0958 000123 0952 000131 0955 0026 0504 097 24 4 106873 0958 000123 0949 000135 0954 0026 0465 0972 24 5 106873 0958 000123 0951 000132 0954 0026 0493 0971 24 3 106873 0968 000108 0938 000148 0953 0026 0205 0989 33 106873 0965 000112 0935 000151 0950 0026 0189 0986 15 a 106873 0955 000127 0942 000143 0949 0026 0378 0971 21 106873 0945 000139 0949 000135 0947 0026 0576 0955 24 0 106873 0956 000125 0938 000148 0947 0026 0327 0973 34 106873 0948 000136 0942 000143 0945 0026 0664 0955 24 2 106873 0964 000114 0924 000162 0944 0026 0025 0989 15 c 106873 0964 000114 0923 000163 0943 0026 0025 099 24 1 106873 0963 000115 0924 000162 0943 0026 0025 0989 29 a 106873 0946 000138 0933 000153 0939 0026 0587 0956 29 b 106873 0941 000144 0932 000154 0937 0026 0624 095 8 b 106873 0957 000124 0917 000169 0936 0026 0025 0982 8 c 106873 0955 000127 0915 000171 0935 0026 0025 098 26 106873 0937 000149 0928 000158 0932 0026 0457 095 8 a 106873 0898 000185 0896 000187 0897 0026 0327 0914 table 9 microsoft research closed 129 4discussion across all of the corpora the best performing system in terms of f score achieved a 0972 with an average of 0918 and median of 0941as one would expect the best f score on the open tests was higher than the best on the closed tests 0972 vs 0964 both on the msr corpusthis result follows from the fact that systems taking part on the open test can utilize moreinformation than those on the closedalso interesting to compare are the oov recall rates be tween the open and closed tracksthe best oov recall in the open evaluation was 0872 compared to just 0813 on the closed trackthese data indicate that oov handling is still the achilles heel of segmentation systems even when the oov rates are relatively smallthese oov recall scores are better than those observed in the first bakeoff in 2003 with similar oovvalues which suggests that advances in unknown word recognition have occurrednever theless oov is still the most significant problem in segmentation systemsthe best score on any track in the 2003 bakeoff was f0961 while the best for this evaluation was f0972 followed by 17 other scores above 0961this shows a general trend to a decrease in error rates from 39 to 28these scores are still far below the theoretical 099 level reflected in the topline and the higher numbers often reflected in the literatureit is plain that one can construct a test set that any given system will achieve very high measures of precision and recall on but these numbers must viewed with caution as they may not scale to other applications or other problem setsthree participants that used the scoringscript in their system evaluation observed differ ent behavior from that of the organizers in the participant run id word count r cr p cp f oov roov riv 4 106873 098 000086 0965 000112 0972 0026 059 099 19 106873 0969 000106 0968 000108 0968 0026 0736 0975 7 106873 0969 000106 0966 000111 0967 0026 0612 0979 27 b 106873 0971 000103 0961 000118 0966 0026 0512 0983 5 106873 0975 000096 0957 000124 0966 0026 0453 0989 13 106873 0959 000121 0971 000103 0965 0026 0785 0964 27 a 106873 097 000104 0957 000124 0963 0026 0466 0984 12 106873 095 000133 0958 000123 
0954 0026 0648 0958 26 106873 0925 000161 0936 000150 0930 0026 0617 0933 8 a 106873 094 000145 0917 000169 0928 0026 0239 0959 34 106873 0916 000170 0933 000153 0924 0026 0705 0922 8 c 106873 0928 000158 0913 000172 0920 0026 0355 0944 8 b 106873 0923 000163 0914 000172 0918 0026 0354 0938 2 106873 0913 000172 0915 000171 0914 0026 0725 0918 8 d 106873 092 000166 0889 000192 0904 0026 0332 0936 8 e 106873 09 000184 0861 000212 0880 0026 0309 0916 27 c 106873 0865 000209 0844 000222 0855 0026 0391 0878 23 106873 0788 000250 0818 000236 0803 0026 037 08 table 10 microsoft research open 130generation of the recall numbers thereby af fecting the f scorewe were unable to replicate the behavior observed by the participant nor could we determine a common set of software versions that might lead to the problemwe verified our computed scores on two different operating systems and two different hardware architecturesin each case the difference was inthe participants favor though the impact was minimalif there is an error in the scripts then it affects all data sets identically so we are confident in the scores as reported herenevertheless we hope that further investigation will uncover the because of the discrepancy so that it can be rectified in the future41future directions this second bakeoff was an unqualified success both in the number of systems represented and in the demonstrable improvement in segmentation technology since 2003however there are stillopen questions that future evaluations can at tempt to answer including how well a system trained on one genre performs when faced with text from a different registerthis will stressoov handling in the extremeconsider a situa tion where a system trained on prc newswire participant run id word count r cr p cp f oov roov riv 27 104372 0953 000131 0946 000140 095 0058 0636 0972 14 104372 0946 000140 0954 000130 095 0058 0787 0956 6 a 104372 0952 000132 0945 000141 0949 0058 0673 0969 6 b 104372 0952 000132 0943 000144 0947 0058 0673 0969 13 104372 0941 000146 095 000135 0946 0058 0813 0949 7 104372 0943 000144 0944 000142 0944 0058 0656 0961 15 b 104372 093 000158 0951 000134 0941 0058 076 0941 4 104372 0954 000130 0927 000161 0941 0058 0518 0981 34 104372 0938 000149 0942 000145 094 0058 0767 0948 15 a 104372 093 000158 0938 000149 0934 0058 0521 0955 5 104372 095 000135 0919 000169 0934 0058 0449 098 9 104372 0922 000166 0934 000154 0928 0058 0728 0934 12 104372 0919 000169 0935 000153 0927 0058 0593 0939 15 c 104372 0904 000182 093 000158 0917 0058 0325 094 29 a 104372 0926 000162 0908 000179 0917 0058 0535 095 29 c 104372 0918 000170 0915 000173 0917 0058 0621 0936 33 104372 0929 000159 0904 000182 0916 0058 0252 0971 21 104372 09 000186 0925 000163 0912 0058 0389 0931 29 b 104372 0917 000171 0903 000183 091 0058 06 0937 8 a 104372 0906 000181 0886 000197 0896 0058 029 0943 8 c 104372 0907 000180 0843 000225 0874 0058 0082 0958 8 b 104372 0906 000181 0842 000226 0873 0058 0081 0956 table 11 peking university closed 131text is given the chinese translation of the ara bic al jazeera newspapera more detailed evaluation of different techniques for dealing with certain constructs is also in order findingthe right balance of learned and heuristic knowledge is paramounttied to the accuracy per formance of such hybrid systems is the runtime speed the tradeoff between accuracy and throughput is vitally important as more and more data becomes computerizedthe overall effects of the various segmentation standards on the comparison of disparate systems has 
yet to be studiedin particular a categorization of the differences in standards and the prevalence of the features reflected would be a worth while studyxia compares the penn chinese treebanks standard with those used in taiwanand china and concludes that most disagree ments among these three guidelines do not makemuch difference in bracketing or sentence inter pretationthis is probably not so transparentwhen evaluating segmentation accuracy how everno segmentation study has yet to examine the handling of short strings where there is little surrounding context as in search engine queriesfuture evaluations should be designed to focus on these and other specific areas of interestacknowledgments this bakeoff could not have taken place without the following institutions who provided training and testing data institute of linguistics academia sinica taipei taiwan institute for computational linguistics beijing university beijing china language information sciences research centre city university of hong kong hong kong sar microsoft research asia beijing china i would like to thank gina lavow and churen huang for their organization of the fourth sighan workshop of which this bakeoff is participant run id word count r cr p cp f oov roov riv 19 104372 0968 000109 0969 000107 0969 0058 0838 0976 4 104372 0968 000109 0966 000112 0967 0058 0826 0977 13 104372 0964 000115 097 000106 0967 0058 0864 097 27 a 104372 0964 000115 0966 000112 0965 0058 0841 0971 6 a 104372 0961 000120 0969 000107 0965 0058 0872 0966 6 b 104372 0961 000120 0966 000112 0963 0058 0869 0966 7 104372 0959 000123 0965 000114 0962 0058 0853 0966 5 104372 0964 000115 096 000121 0962 0058 0788 0974 34 104372 0944 000142 0961 000120 0952 0058 0869 0948 16 104372 0945 000141 0956 000127 0951 0058 079 0955 31 104372 0952 000132 0951 000134 0951 0058 0784 0962 8 a 104372 0943 000144 0944 000142 0943 0058 0737 0955 12 104372 0932 000156 0944 000142 0938 0058 0755 0943 8 b 104372 0886 000197 0919 000169 0902 0058 0561 0905 27 b 104372 0877 000203 0904 000182 089 0058 072 0886 23 104372 0781 000256 0846 000223 0813 0058 0628 0791 table 12 peking university open 132 part and john oneil for his comments on an earlier draft of this paperfinally i would also like to thank the participants for their interest and hard work in making this bakeoff a success
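the baseline and topline figures in tables 3 and 4 above were produced with a simple left-to-right maximal matching segmenter (the organizers' version was a perl script) built from the training or test vocabulary, and all runs were scored with word-level recall, precision and balanced f. the sketch below is a python illustration of both steps as described in the text, not the official bakeoff scorer; the oov and iv recall breakdown is omitted for brevity.

def max_match(chars, lexicon, max_len=10):
    """greedy left-to-right maximal matching: at each position take the longest
    lexicon entry that matches, falling back to a single character."""
    words, i = [], 0
    while i < len(chars):
        for j in range(min(len(chars), i + max_len), i, -1):
            if chars[i:j] in lexicon or j == i + 1:
                words.append(chars[i:j])
                i = j
                break
    return words

def prf(gold_words, test_words):
    """word-level recall, precision and balanced f-score; a word counts as
    correct only if both of its boundaries match the gold segmentation."""
    def spans(words):
        out, start = set(), 0
        for w in words:
            out.add((start, start + len(w)))
            start += len(w)
        return out
    g, t = spans(gold_words), spans(test_words)
    correct = len(g & t)
    r, p = correct / len(g), correct / len(t)
    f = 2 * p * r / (p + r) if p + r else 0.0
    return r, p, f

# toy usage (latin letters stand in for chinese characters)
lexicon = {"ab", "abc", "d", "de"}
gold = ["abc", "de"]
segmented = max_match("abcde", lexicon)
print(segmented)              # ['abc', 'de']
print(prf(gold, segmented))   # (1.0, 1.0, 1.0)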
I05-3017
the second international chinese word segmentation bakeoff. the second international chinese word segmentation bakeoff was held in the summer of 2005 to evaluate the current state of the art in word segmentation. twenty-three groups submitted 130 result sets over two tracks and four different corpora. we found that the technology has improved over the intervening two years, though the out-of-vocabulary problem is still of paramount importance. in the second international chinese word segmentation bakeoff, two of the highest scoring systems in the closed track competition were based on a crf model.
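the cr and cp columns in the bakeoff result tables are the half-widths of an approximate 95% interval obtained from the central limit theorem for bernoulli trials; the expression is partly garbled in the text above, but 2*sqrt(p*(1-p)/n) with n the number of words reproduces the published figures (for example r = 0.952 on the 122,610-word as test set gives about 0.00122, matching table 5). the comparison rule below, non-overlapping intervals, is one reading of the loosely worded criterion in the text and is an assumption made for illustration.

from math import sqrt

def half_width(p, n):
    """approximate 95% half-width for a proportion p over n bernoulli trials."""
    return 2 * sqrt(p * (1 - p) / n)

def significantly_different(p1, p2, n):
    """treat two systems as different if their intervals do not overlap
    (an assumed reading of the bakeoff paper's comparison criterion)."""
    return abs(p1 - p2) > half_width(p1, n) + half_width(p2, n)

print(round(half_width(0.952, 122610), 5))              # 0.00122, as in table 5
print(significantly_different(0.952, 0.953, 122610))    # False: intervals overlap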
a conditional random field word segmenter for sighan bakeoff 2005. we present a chinese word segmentation system submitted to the closed track of sighan bakeoff 2005. our segmenter was built using a conditional random field sequence model that provides a framework to use a large number of linguistic features such as character identity, morphological and character reduplication features. because our morphological features were extracted from the training corpora automatically, our system was not biased toward any particular variety of mandarin; thus our system does not overfit the variety of mandarin most familiar to the system designers. our final system achieved an f-score of 0.947 (as), 0.943 (hk), 0.950 (pk) and 0.964 (msr). the 2005 sighan bakeoff included four different corpora, academia sinica (as), city university of hong kong (hk), peking university (pk) and microsoft research asia (msr), each of which has its own definition of a word. in the 2003 sighan bakeoff, no single model performed well on all corpora included in the task. rather, systems tended to do well on corpora largely drawn from a set of mandarin varieties similar to the one they were originally developed for. across corpora, variation is seen both in the lexicons and in the word segmentation standards. we concluded that, for future systems, generalization across such different mandarin varieties is crucial. to this end, we proposed a new model using character identity, morphological and character reduplication features in a conditional random field modeling framework. our system builds on research into conditional random fields (crf), a statistical sequence modeling framework first introduced by lafferty et al. work by peng et al. first used this framework for chinese word segmentation by treating it as a binary decision task, such that each character is labeled either as the beginning of a word or the continuation of one. gaussian priors were used to prevent overfitting, and a quasi-newton method was used for parameter optimization. the probability assigned to a label sequence y for a particular sequence of characters x by a crf is given by the equation below:

p_\lambda(y \mid x) = \frac{1}{z(x)} \exp\Big( \sum_{c \in C} \sum_{k} \lambda_k f_k(y_{c-1}, y_c, x, c) \Big)

where z(x) is a normalization term, f_k is a feature function, and c indexes into characters in the sequence being labeled. a crf allows us to utilize a large number of n-gram features and different state-sequence-based features, and also provides an intuitive framework for the use of morphological features. 3.1 features. the linguistic features used in our model fall into three categories: character identity n-grams, morphological features and character reduplication features. for each state, the character identity features are represented using feature functions that key off of the identity of the character in the current, preceding and subsequent positions. specifically, we used four types of unigram feature functions, designated as c0 (current character), c1 (next character), c-1 (previous character) and c-2. furthermore, bigram features were used, notationally designated here as conjunctions of the previously specified unigram features: c0c1, c-1c0, c-1c1, c-2c-1 and c2c0. given that unknown words are normally more than one character long, when representing the morphological features as feature functions, such feature functions keyed off the morphological information extracted from both the preceding state and the current state. our morphological features are based upon the intuition regarding unknown word features given in gao et al.; specifically, their idea was to use productive affixes and characters that only occurred independently to predict boundaries of unknown words. to construct a table containing affixes
of unknown words, rather than using threshold-filtered affix tables in a separate unknown word model as was done in gao et al., we first extracted rare words from a corpus and then collected their first and last characters to construct the prefix and suffix tables. for the table of individual character words, we collected, for each corpus, a table of the characters that always occurred alone as a separate word in the given corpus. we also collected a list of bigrams from each training corpus to distinguish known strings from unknown. adopting all the features together in one model and using the automatically generated morphological tables prevented our system from manually overfitting the mandarin varieties we are most familiar with. the tables are used in the following ways: 1) c-1c0 unknown word feature functions were created for each specific pair of characters in the bigram tables; such feature functions are active if the characters in the respective states match the feature function's characters. these feature functions are designed to distinguish known strings from unknown. 2) c-1, c0 and c1 individual character feature functions were created for each character in the individual character word table, and are likewise active if the respective character matches the feature function's character. 3) c-1 prefix feature functions are defined over characters in the prefix table and fire if the character in the preceding state matches the feature function's character. 4) c0 suffix feature functions are defined over suffix table characters and fire if the character in the current state matches the feature function's character. additionally, we also use reduplication feature functions that are active based on the repetition of a given character. we used two such feature functions: one that fires if the previous and the current character (c-1 and c0) are identical, and one that does so if the subsequent and the previous characters (c1 and c-1) are identical. most features appeared in the first-order templates, with a few of the character identity features in both the zero-order and first-order templates. we also normalized punctuation, due to the fact that mandarin has a huge variety of punctuation marks. table 1 shows the number of data features and lambda weights in each corpus.

table 1. the number of features in each corpus:
corpus   # of data features   # of lambda weights
as       2,558,840            8,076,916
hk       2,308,067            7,481,164
pk       1,659,654            5,377,146
msr      3,634,585            12,468,890

3.2 experiments. 3.2.1 results on sighan bakeoff 2003. experiments done while developing this system showed that its performance was significantly better than that of peng et al.: as seen in table 2, our system's f-score was 0.863 on ctb versus 0.849 f for peng et al. we do not at present have a good understanding of which aspects of our system give it superior performance.

table 2. comparison of peng et al. and our f-score on the closed track in sighan bakeoff 2003:
corpus   our f-score   f-score peng et al.
ctb      0.863         0.849
as       0.970         0.956
hk       0.947         0.928
pk       0.953         0.941

3.2.2 results on sighan bakeoff 2005. our final system achieved an f-score of 0.947 (as), 0.943 (hk), 0.950 (pk) and 0.964 (msr). this shows that our system successfully generalized and achieved state-of-the-art performance on all four corpora.

table 3. performance of the features cumulatively, starting with the n-gram features (f-score); the parenthesized hk value is the hk score with the corrected test-set punctuation discussed in the error analysis:
features                 as      hk               pk      msr
n-gram                   0.943   0.946 (0.953)    0.950   0.961
n-gram + unk + redupl    0.947   0.943 (0.952)    0.950   0.964

table 3 lists our results on the four corpora. we give our results using just the character identity based features, and the character identity features plus unknown words
and reduplication features. our unknown word features only helped on as and msr. both of these corpora have words with more characters than hk and pk; this indicates that our unknown word features were more useful for corpora with segmentation standards that tend to result in longer words. in the hk corpus, when we added in unknown word features, our performance dropped. however, we found that the testing data uses different punctuation than the training set. our system could not distinguish new word characters from new punctuation, since having a complete punctuation list is considered external knowledge for closed track systems. if the new punctuation were not unknown to us, our performance on hk data would have gone up to 0.952 f, and the unknown word features would not have hurt the system too much. table 4 presents recalls, precisions, f-scores and recalls on both unknown and known words.

table 4. detailed performance on each corpus (the second hk row is hk with the corrected test-set punctuation):
corpus   r       p       f       roov    riv
as       0.950   0.943   0.947   0.718   0.960
hk       0.941   0.946   0.943   0.698   0.961
hk       0.952   0.952   0.952   0.791   0.965
pk       0.946   0.954   0.950   0.787   0.956
msr      0.962   0.966   0.964   0.717   0.968

3.3 error analysis. our system performed reasonably well on morphologically complex new words built with productive suffixes. however, it overgeneralized to words with frequent suffixes. for the corpora that considered 4-character idioms as a word, our system combined most new idioms together. this differs greatly from the results that one would likely obtain with a more traditional max-match based technique, as such an algorithm would segment novel idioms. one shortcoming of our system is that it is not robust enough to distinguish between ordinal numbers and numbers with measure nouns; avoiding this problem might require more syntactic knowledge than was implicitly given in the training data. finally, some errors are due to inconsistencies in the gold segmentation of non-hanzi characters: for example, "pentium4" is a word but "pc133" is two words; sometimes "8" is a word but sometimes it is segmented into two words. our system used a conditional random field sequence model in conjunction with character identity features, morphological features and character reduplication features. we extracted our morphological information automatically to prevent overfitting to the mandarin of any particular mandarin-speaking area. our final system achieved an f-score of 0.947 (as), 0.943 (hk), 0.950 (pk) and 0.964 (msr). thanks to kristina toutanova for her generous help, and to jenny rose finkel, who developed such a great conditional random field package. this work was funded by the advanced research and development activity's advanced question answering for intelligence program, national science foundation award iis-0325646, and a stanford graduate fellowship.
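the feature templates of section 3.1 and the numbered list of table-based features above amount to a per-character feature extractor. the sketch below is illustrative, not the authors' code: the string feature names and the padding symbol are choices made here, and the c2c0 conjunction and the bigram-table unknown-word feature are omitted for brevity; the prefix, suffix and single-character-word tables are the automatically collected lists described in the text.

def char(chars, i):
    # out-of-range positions are padded with an assumed sentinel symbol
    return chars[i] if 0 <= i < len(chars) else "<PAD>"

def features(chars, i, prefixes=frozenset(), suffixes=frozenset(), singles=frozenset()):
    c_2, c_1, c0, c1 = (char(chars, i + k) for k in (-2, -1, 0, 1))
    feats = [
        # character identity unigrams and their conjunctions (bigrams)
        "C0=" + c0, "C1=" + c1, "C-1=" + c_1, "C-2=" + c_2,
        "C0C1=" + c0 + c1, "C-1C0=" + c_1 + c0,
        "C-1C1=" + c_1 + c1, "C-2C-1=" + c_2 + c_1,
    ]
    # reduplication features: identical neighbouring characters
    if c_1 == c0:
        feats.append("REDUP(C-1,C0)")
    if c_1 == c1:
        feats.append("REDUP(C-1,C1)")
    # morphological features keyed to the automatically built tables
    if c_1 in prefixes:
        feats.append("PREFIX(C-1)")
    if c0 in suffixes:
        feats.append("SUFFIX(C0)")
    feats += ["SINGLE(%s)=%s" % (name, c) for name, c in
              (("C-1", c_1), ("C0", c0), ("C1", c1)) if c in singles]
    return feats

# toy usage: latin letters stand in for chinese characters
print(features("abba", 2, prefixes={"b"}, suffixes={"b"}, singles={"a"}))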
I05-3027
a conditional random field word segmenter for sighan bakeoff 2005. we present a chinese word segmentation system submitted to the closed track of sighan bakeoff 2005. our segmenter was built using a conditional random field sequence model that provides a framework to use a large number of linguistic features such as character identity, morphological and character reduplication features. because our morphological features were extracted from the training corpora automatically, our system was not biased toward any particular variety of mandarin. thus our system does not overfit the variety of mandarin most familiar to the system designers. our final system achieved an fscore of 0.947 (as), 0.943 (hk), 0.950 (pk) and 0.964 (msr). we develop the stanford chinese word segmenter.
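To make the feature inventory of the segmenter summarized above concrete, the sketch below extracts the described feature types (character identity, c-1c0 unknown-word features over the bigram table, individual-character features, prefix and suffix features, and the two reduplication features) for one position of a sentence. The feature names, the padding symbols and the dict-of-sets table format are illustrative assumptions; the real system expresses these as CRF templates over zero-order and first-order cliques rather than as a hand-written extractor.

def segmentation_features(chars, i, tables):
    # chars: the characters of a sentence; i: the current position (c0)
    # tables: dict with 'prefix', 'suffix', 'single' and 'bigram' sets built from training data
    c0 = chars[i]
    c_prev = chars[i - 1] if i > 0 else '<S>'
    c_next = chars[i + 1] if i + 1 < len(chars) else '</S>'
    feats = ['C0=' + c0, 'C-1=' + c_prev, 'C-1C0=' + c_prev + c0]   # character identity
    # c-1c0 unknown-word features: active when the character pair appears in the bigram table
    if (c_prev, c0) in tables['bigram']:
        feats.append('KNOWN_BIGRAM=' + c_prev + c0)
    # individual-character-word features for c-1, c0 and c+1
    for name, ch in (('C-1', c_prev), ('C0', c0), ('C+1', c_next)):
        if ch in tables['single']:
            feats.append('SINGLE_' + name)
    # prefix feature on the preceding character, suffix feature on the current character
    if c_prev in tables['prefix']:
        feats.append('PREFIX_C-1')
    if c0 in tables['suffix']:
        feats.append('SUFFIX_C0')
    # reduplication features: c-1 == c0 and c-1 == c+1
    if c_prev == c0:
        feats.append('REDUP_C-1_C0')
    if c_prev == c_next:
        feats.append('REDUP_C-1_C+1')
    return feats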
using contextual speller techniques and language modeling for esl error correction we present a modular system for detection and correction of errors made by non native writers we focus on two error types the incorrect use of determiners and the choice of prepositions we use a decision tree approach inspired by contextual spelling systems for detection and correction suggestions and a large language model trained on the gigaword corpus to provide additional information to filter out spurious suggestions we show how this system performs on a corpus of nonnative english text and discuss strategies for future enhancements english is today the de facto lingua franca for commerce around the globeit has been estimated that about 750m people use english as a second language as opposed to 375m native english speakers while as much as 74 of writing in english is done by nonnative speakershowever the errors typically targeted by commercial proofing tools represent only a subset of errors that a nonnative speaker might makefor example while many nonnative speakers may encounter difficulty choosing among prepositions this is typically not a significant problem for native speakers and hence remains unaddressed in proofing tools such as the grammar checker in microsoft word plainly there is an opening here for automated proofing tools that are better geared to the nonnative usersone challenge that automated proofing tools face is that writing errors often present a semantic dimension that renders it difficult if not impossible to provide a single correct suggestionthe choice of definite versus indefinite determinera common error type among writers with a japanese chinese or korean language background owing to the lack of overt markers for definiteness and indefinitenessis highly dependent on larger textual context and world knowledgeit seems desirable then that proofing tools targeting such errors be able to offer a range of plausible suggestions enhanced by presenting realworld examples that are intended to inform a users selection of the most appropriate wording in the context1our system currently targets eight different error types 1preposition presence and choicein the other hand 2definite and indefinite determiner presenceand choice i am teacheri am interesting in this bookmy teacher does is a good teacher 1 liu et al 2000 take a similar approach retrievingexample sentences from a large corpus449i writed a letter this is a china book compounds i am a student of university 8noun pluralizationthey have many knowledges in this paper we will focus on the two most prominent and difficult errors choice of determiner and prepositionsempirical justification for targeting these errors comes from inspection of several corpora of nonnative writingin the nict japanese learners of english corpus 266 of all errors are determiner related and about 10 are preposition related making these two error types the dominant ones in the corpusalthough the jle corpus is based on transcripts of spoken language we have no reason to believe that the situation in written english is substantially differentthe chinese learners of english corpus has a coarser and somewhat inconsistent error tagging scheme that makes it harder to isolate the two errors but of the nonorthographic errors more than 10 are determiner and number relatedroughly 2 of errors in the corpus are tagged as prepositionrelated but other preposition errors are subsumed under the collocation errorcategory which makes up about 5 of errors3 related workmodels for 
determiner and preposition selection have mostly been investigated in the context of sentence realization and machine translation such approaches typically rely on the fact that preposition or determiner choice is made in otherwise nativelike sentencesturner and charniak for example utilize a language model based on a statistical parser for penn tree bank datasimilarly de felice and pulman utilize a set of sophisticated syntactic and semantic analysis features to predict 5 common english prepositionsobviously this is impractical in a setting where noisy nonnative text is subjected to proofingmeanwhile work on automated error detection on nonnative text focuses primarily on detection of errors rather than on the more difficult task of supplying viable corrections more recently han et al use a maximum entropy classifier to propose article corrections in tesol essays while izumi et al and chodorow et al present techniques of automatic preposition choice modelingthese more recent efforts nevertheless do not attempt to integrate their methods into a more general proofing application designed to assist nonnative speakers when writing englishfinally yi et al designed a system that uses web counts to determine correct article usage for a given sentence targeting esl users4 system descriptionour system consists of three major components 1suggestion provider 2language model 3example provider the suggestion provider contains modules for each error type discussed in section 2sentences are tokenized and partofspeech tagged before they are presented to these moduleseach module determines parts of the sentence that may contain an error of a specific type and one or more possible correctionsfour of the eight errorspecific modules mentioned in section 2 employ machine learned techniques the other four are based on heuristicsgerundinfinitive confusion and auxiliary presencechoice each use a single classifierpreposition and determiner modules each use two classifiers one to determine whether a prepositionarticle should be present and one for the choice of prepositionarticleall suggestions from the suggestion provider are collected and passed through the language modelas a first step a suggested correction has to have a higher language model score than the original sentence in order to be a candidate for being surfaced to the usera second set of heuristic thresholds is based on a linear combination of class probability as assigned by the classifier and language model scorethe example provider queries the web for exemplary sentences that contain the suggested correctionthe user can choose to consult this information to make an informed decision about the correction450 41 suggestion provider modules fordeterminers and prepositions the sp modules for determiner and preposition choice are machine learned componentsideally one would train such modules on large data sets of annotated errors and corrected counterpartssuch a data set however is not currently availableas a substitute we are using native english text for training currently we train on the full text of the english encarta encyclopedia and a random set of 1m sentences from a reuters news data setthe strategy behind these modules is similar to a contextual speller as described for example in for each potential insertion point of a determiner or preposition we extract context features within a window of six tokens to the right and to the leftfor each token within the window we extract its relative position the token string and its partof speech 
tagpotential insertion sites are determined heuristically from the sequence of pos tagsbased on these features we train a classifier for preposition choice and determiner choicecurrently we train decision tree classifiers with the winmine toolkit we also experimented with linear svms but decision trees performed better overall and training and parameter optimization were considerably more efficientbefore training the classifiers we perform feature ablation by imposing a count cutoff of 10 and by limiting the number of features to the top 75k features in terms of log likelihood ratio we train two separate classifiers for both determiners and preposition decision whether or not a determinerpreposition should be present decision which determinerpreposition is the most likely choice given that a determinerpreposition is present in the case of determiners class values for the ch classifier are aan and thepreposition choice is limited to a set of 13 prepositions that figure prominently in the errors observed in the jle corpus about as at by for from in like of on since to with than other the decision tree classifiers produce probability distributions over class values at their leaf nodesfor a given leaf node the most likely prepositiondeterminer is chosen as a suggestionif there are other class values with probabilities above heuristically determined thresholds2 those are also included in the list of possible suggestionsconsider the following example of an article related error i am teacher from koreaas explained above the suggestion provider module for article errors consists of two classifiers one for presenceabsence of an article the other for article choicethe string above is first tokenized and then partofspeech tagged 0iprp 1amvbp 2teachernn 3fromin 4koreannp 5based on the sequence of pos tags and capitalization of the nouns a heuristic determines that there is one potential noun phrase that could contain an article teacherfor this possible article position the article presenceabsence classifier determines the probability of the presence of an article based on a feature vector of pos tags and surrounding lexical items p 054 given that the probability of an article in this position is higher than the probability of not having an article the second classifier is consulted to provide the most likely choice of article p 004 p 096 given this probability distribution a correction suggestion i am teacher from korea i am a teacher from korea is generated and passed on to evaluation by the language model component42 the language modelthe language model is a 5gram model trained on the english gigaword corpus in order to preserve context information as much as possible we used interpolated kneser ney smoothing without count cutoffwith a 120kword vocabulary the trained language model contains 54 million bigrams 338 million trigrams 801 million 4grams 2 again we are working on learning these thresholdsempirically from data451 and 12 billion 5gramsin the example from the previous section the two alternative strings of the original user input and the suggested correction are scored by the language model i am teacher from koreascore 019 i am a teacher from koreascore 060 the score for the suggested correction is significantly higher than the score for the original so the suggested correction is provided to the user43 the example providerin many cases the sp will produce several alternative suggestions from which the user may be able to pick the appropriate correction reliablyin other cases however it may 
not be clear which suggestion is most appropriatein this event the user can choose to activate the example provider which will then perform a web search to retrieve relevant example sentences illustrating the suggested correctionfor each suggestion we create an exact string query including a small window of context to the left and to the right of the suggested correctionthe query is issued to a search engine and the retrieved results are separated into sentencesthose sentences that contain the string query are added to a list of example candidatesthe candidates are then ranked by two initially implemented criteria sentence length and context overlap we have not yet performed a user study to evaluate the usefulness of the examples provided by the systemsome examples of usage that we retrieve are given below with the query string in boldface original i am teacher from koreasuggestion i am a teacher from koreaall top 3 examples i am a teacheroriginal so smokers have to see doctor more often than nonsmokerssuggestion so smokers have to see a doctor more often than nonsmokerstop 3 examples 1do people going through withdrawal haveto see a doctor2usually a couple should wait to see a doctor until after they have tried to get pregnant for a year3if you have had congestion for over a week you should see a doctororiginal i want to travel disneyland in marchsuggestion i want to travel to disneyland in marchtop 3 examples 1timothy wish was to travel todisneyland in california2should you travel to disneyland incalifornia or to disney world in florida3the tourists who travel to disneyland incalifornia can either choose to stay in disney resorts or in the hotel for disneyland vacations5 evaluationwe perform two different types of evaluation on our systemautomatic evaluation is performed on native text under the assumption that the native text does not contain any errors of the type targeted by our systemfor example the original choice of preposition made in the native text would serve as supervision for the evaluation of the preposition modulehuman evaluation is performed on non native text with a human rater assessing each suggestion provided by the system51 individual sp modulesfor evaluation we split the original training data discussed in section 41 into training and test sets we then retrained the classifiers on this reduced training set and applied them to the heldout test setsince there are two models one for prepositiondeterminer presence and absence and one for prepositiondeterminer choice we report combined accuracy numbers of the two classifiersvotes stands for the counts of votes for class value absence from pa votes stands for counts of votes for presence from pa acc is the accuracy of the pa classifier acc the accuracy of the choice classifiercombined accuracy is defined as in equation 1equation 1 combined accuracy of the presenceabsence and choice models 452 the total number of cases in the test set is 1578342 for article correction and 1828438 for preposition correction511 determiner choice accuracy of the determiner pa and ch models and their combination is shown in table 1model pa ch combined accuracy 8961 8597 8607 table 1 accuracy of the determiner pa ch and combined modelsthe baseline is 699 the overall accuracy of this module is stateoftheart compared with results reported in the literature turner and charniak 2007 obtained the best reported accuracy to date of 8674 using a charniak language model based on a full statistical parser on the penn tree bankthese numbers are of course 
not directly comparable given the different corporaon the other hand the distribution of determiners is similar in the ptb and in our data ptb reutersencarta mix no determiner 700 699 the 206 222 aan 94 78 table 2 distribution of determiners in the penn tree bank and in our reutersencarta dataprecision and recall numbers for both models on our test set are shown in table 3 and table 4article pa classifier precision recall presence 8499 7954 absence 9143 9395 table 3 precision and recall of the article pa classifierarticle ch classifier precision recall the 8873 9281 aan 7655 6658 table 4 precision and recall of the article ch classifier512 preposition choice the preposition choice model and the combined model achieve lower accuracy than the corresponding determiner models a result that can be expected given the larger choice of candidates and hardness of the taskaccuracy numbers are presented in table 5model pa ch combined accuracy 9106 6232 8607 table 5accuracy of the preposition pa ch and combined modelsthe baseline in this task is 2894 precision and recall numbers are shown in table 6 and table 7from table 7 it is evident that prepositions show a wide range of predictabilityprepositions such as than and about show high recall and precision due to the lexical and morphosyntactic regularities that govern their distributionat the low end the semantically more independent prepositions since and at show much lower precision and recall numberspreposition pa classifier precision recall presence 9082 8720 absence 9122 9378 table 6 precision and recall of the preposition pa classifierpreposition ch classifier precision recall other 5375 5441 in 5593 6293 for 5618 3876 of 6809 8585 on 4694 2447 to 7954 5172 with 6486 2500 at 5000 2967 by 4286 6046 as 7678 6418 from 8113 3909 since 5000 1000 about 9388 6970 than 9524 9091 table 7 precision and recall of the preposition ch classifier453 chodorow et al present numbers on an independently developed system for detection of preposition error in nonnative englishtheir approach is similar to ours in that they use a classifier with contextual feature vectorsthe major differences between the two systems are the additional use of a language model in our system and from a usability perspective in the example provider module we added to the correction processsince both systems are evaluated on different data sets3 however the numbers are not directly comparable52 language model impactthe language model gives us an additional piece of information to make a decision as to whether a correction is indeed validinitially we used the language model as a simple filter any correction that received a lower language model score than the original was filtered outas a first approxi mation this was an effective step it reduced the number of preposition corrections by 668 and the determiner corrections by 507 and increased precision dramaticallythe language model alone however does not provide sufficient evidence if we produce a full set of preposition suggestions for each potential preposition location and rank these suggestions by lm score alone we only achieve 5836 accuracy on reuters datagiven that we have multiple pieces of information for a correction candidate namely the class probability assigned by the classifier and the language model score it is more effective to combine these into a single score and impose a tunable threshold on the score to maximize precisioncurrently this threshold is manually set by analyzing the flags in a development set53 human evaluationa 
complete human evaluation of our system would have to include a thorough user study and would need to assess a variety of criteria from the accuracy of individual error detection and corrections to the general helpfulness of real web based example sentencesfor a first human evaluation of our system prototype we decided to 3 chodorow et al evaluate their system onproprietary student essays from nonnative students where they achieve 778 precision at 304 recall for the preposition substitution tasksimply address the question of accuracy on the determiner and preposition choice tasks on a sample of nonnative textfor this purpose we ran the system over a random sample of sentences from the clec corpus an independent judge annotated each flag produced by the system as belonging to one of the following categories the correction is valid and fixes the problem the error is correctly identified but the suggested correction does not fix it the original and the rewrite are both equally good the error is at or near the suggested correction but it is a different kind of error there is a spelling error at or near the correction the correction is wrong the original is correct table 8 shows the results of this human assessment for articles and prepositionsarticles prepositions count ratio count ratio correction is valid 240 55 165 46 error identified suggestion does not fix it 10 2 17 5 original and suggestion equally good 17 4 38 10 misdiagnosis 65 15 46 13 spelling error near correction 37 8 20 6 original correct 70 16 76 21 table 8 article and preposition correction accuracy on clec datathe distribution of corrections across deletion insertion and substitution operations is illustrated in table 9the most common article correction is insertion of a missing articlefor prepositions substitution is the most common correction again an expected result given that the presence of a 454 preposition is easier to determine for a nonnative speaker than the actual choice of the correct prepositiondeletion insertion substitution articles 8 79 13 prepositions 15 10 76 table 9 ratio of deletion insertion and substitution operations6 conclusion and future workhelping a nonnative writer of english with the correct choice of prepositions and definiteindefinite determiners is a difficult challengeby combining contextual speller based methods with language model scoring and providing webbased examples we can leverage the combination of evidence from multiple sourcesthe human evaluation numbers presented in the previous section are encouragingarticle and preposition errors present the greatest difficulty for many learners as well as machines but can nevertheless be corrected even in extremely noisy text with reasonable accuracyproviding contextually appropriate reallife examples alongside with the suggested correction will we believe help the nonnative user reach a more informed decision than just presenting a correction without additional evidence and informationthe greatest challenge we are facing is the reduction of false flags ie flags where both error detection and suggested correction are incorrectsuch flagsespecially for a nonnative speakercan be confusing despite the fact that the impact is mitigated by the set of examples which may clarify the picture somewhat and help the users determine that they are dealing with an inappropriate correctionin the current system we use a set of carefully crafted heuristic thresholds that are geared towards minimizing false flags on a development set based on detailed error 
analysisas with all manually imposed thresholding this is both a laborious and brittle process where each retraining of a model requires a retuning of the heuristicswe are currently investigating a learned ranker that combines information from language model and classifiers using web counts as a supervision signalwe thank claudia leacock for her meticulous analysis of errors and human evaluation of the system output as well as for much invaluable feedback and discussion
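As a rough sketch of the decision flow for article suggestions described above, the function below combines the presence/absence classifier, the choice classifier and the language-model comparison: an article is proposed only if the PA classifier judges one necessary, the CH classifier then picks the most probable article, and the suggestion is surfaced only when the 5-gram language model scores the rewrite above the original. The pa_model, ch_model and lm objects are stand-ins with assumed method names (prob_present, prob_dist, score), and the 0.5 threshold is illustrative, not the system's tuned heuristic.

def suggest_article(tokens, np_start, pa_model, ch_model, lm, pa_threshold=0.5):
    # tokens: list of (word, pos_tag) pairs for the sentence
    # np_start: index of a candidate noun phrase that currently has no article
    # context features: relative position, token string and pos tag in a +/- 6 token window
    feats = []
    for offset in range(-6, 7):
        j = np_start + offset
        if 0 <= j < len(tokens):
            word, tag = tokens[j]
            feats.append((offset, word, tag))
    p_present = pa_model.prob_present(feats)       # p(article should be present | context)
    if p_present <= pa_threshold:
        return None                                # no article suggested at this position
    article_probs = ch_model.prob_dist(feats)      # e.g. {'a/an': 0.04, 'the': 0.96}
    best = max(article_probs, key=article_probs.get)
    original = [w for w, _ in tokens]
    rewrite = original[:np_start] + [best] + original[np_start:]
    # a suggestion is only surfaced when the language model prefers the rewrite
    if lm.score(rewrite) > lm.score(original):
        return ' '.join(rewrite)
    return None

In this sketch, calling suggest_article on the tagged sentence "i am teacher from korea" with np_start pointing at "teacher" would mirror the worked example above, proposing "i am a teacher from korea" when the classifiers and the language model agree.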
I08-1059
using contextual speller techniques and language modeling for esl error correction. we present a modular system for detection and correction of errors made by nonnative writers. we focus on two error types: the incorrect use of determiners and the choice of prepositions. we use a decision-tree approach inspired by contextual spelling systems for detection and correction suggestions, and a large language model trained on the gigaword corpus to provide additional information to filter out spurious suggestions. we show how this system performs on a corpus of nonnative english text and discuss strategies for future enhancements. we use a language model in addition to a classifier and combine the classifier output and language model scores in a meta classifier. we use a single language model score on hypothesized error and potential correction to filter out unlikely correction candidates.
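The example provider component described in the system above can be approximated as in the sketch below: form an exact-phrase query from the suggested correction plus a small context window, split retrieved text into sentences, keep those that contain the query, and rank by sentence length and by lexical overlap with the user's sentence. The web_search callable, the window size, the crude period-based sentence splitting and the particular ranking order (shorter sentences first, ties broken by larger overlap) are all assumptions for illustration; the paper states only that sentence length and context overlap are the ranking criteria.

def example_sentences(user_tokens, corrected_tokens, corr_start, corr_end,
                      web_search, window=2, top_n=3):
    # corrected_tokens: the corrected sentence; [corr_start:corr_end] is the suggested correction
    left = corrected_tokens[max(0, corr_start - window):corr_start]
    right = corrected_tokens[corr_end:corr_end + window]
    query = ' '.join(left + corrected_tokens[corr_start:corr_end] + right)
    candidates = []
    for page_text in web_search('"' + query + '"'):    # web_search is a hypothetical stand-in
        for sent in page_text.split('.'):              # crude sentence splitting for the sketch
            sent = sent.strip()
            if query in sent:
                overlap = len(set(sent.split()) & set(user_tokens))
                candidates.append((len(sent.split()), -overlap, sent))
    candidates.sort()                                  # shorter sentences, then larger overlap
    return [sent for _, _, sent in candidates[:top_n]]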
learning dependency translation models as collections of finitestate head transducers the paper defines weighted head transducers finitestate machines that perform middleout string transduction these transducers are strictly more expressive than the special case of standard lefttoright finitestate transducers dependency transduction models are then defined as collections of weighted head transducers that are applied hierarchically a dynamic programming search algorithm is described for finding the optimal transduction of an input string with respect to a dependency transduction model a method for automatically training a dependency transduction model from a set of inputoutput example strings is presented the method first searches for hierarchical alignments of the training examples guided by correlation statistics and then constructs the transitions of head transducers that are consistent with these alignments experimental results are given for applying the training method to translation from english to spanish and japanese the paper defines weighted head transducers finitestate machines that perform middleout string transductionthese transducers are strictly more expressive than the special case of standard lefttoright finitestate transducersdependency transduction models are then defined as collections of weighted head transducers that are applied hierarchicallya dynamic programming search algorithm is described for finding the optimal transduction of an input string with respect to a dependency transduction modela method for automatically training a dependency transduction model from a set of inputoutput example strings is presentedthe method first searches for hierarchical alignments of the training examples guided by correlation statistics and then constructs the transitions of head transducers that are consistent with these alignmentsexperimental results are given for applying the training method to translation from english to spanish and japanesewe will define a dependency transduction model in terms of a collection of weighted head transducerseach head transducer is a finitestate machine that differs from quotstandardquot finitestate transducers in that instead of consuming the input string left to right it consumes it quotmiddle outquot from a symbol in the stringsimilarly the output of a head transducer is built up middle out at positions relative to a symbol in the output stringthe resulting finitestate machines are more expressive than standard lefttoright transducersin particular they allow longdistance movement with fewer states than a traditional finitestate transducer a useful property for the translation task to which we apply them in this paperin section 2 we introduce head transducers and explain how inputoutput positions on state transitions result in middleout transductionwhen applied to the problem of translation the head transducers forming the dependency transduction model operate on input and output strings that are sequences of dependents of corresponding headwords in the source and target languagesthe dependency transduction model produces synchronized dependency trees in which each local tree is produced by a head transducerin other words the dependency model applies the head transducers recursively imposing a recursive decomposition of the source and target stringsa dynamic programming search algorithm finds optimal derivations of target strings from input strings or word lattices produced by a speech recognizersection 3 defines dependency transduction models 
and describes the search algorithmwe construct the dependency transduction models for translation automatically from a set of unannotated examples each example comprising a source string and a corresponding target stringthe recursive decomposition of the training examples results from an algorithm for computing hierarchical alignments of the examples described in section 42this alignment algorithm uses dynamic programming search guided by sourcetarget word correlation statistics as described in section 41having constructed a hierarchical alignment for the training examples a set of head transducer transitions are constructed from each example as described in section 43finally the dependency transduction model is constructed by aggregating the resulting head transducers and assigning transition weights which are log probabilities computed from the training counts by simple maximum likelihood estimationwe have applied this method of training statistical dependency transduction models in experiments on englishtospanish and englishtojapanese translations of transcribed spoken utterancesthe results of these experiments are described in section 5 our concluding remarks are in section 6in this section we describe the basic structure and operation of a weighted head transducerin some respects this description is simpler than earlier presentations for example here final states are simply a subset of the transducer states whereas in other work we have described the more general case in which final states are specified by a probability distributionthe simplified description is adequate for the purposes of this paperformally a weighted head transducer is a 5tuple an alphabet w of input symbols an alphabet v of output symbols a finite set q of states go qs a set of final states f c q and a finite set t of state transitionsa transition from state q to state q has the form where w is a member of w or is the empty string e v is a member of v or 6 the integer a is the input position the integer 0 is the output position and the real number c is the weight or cost of the transitiona transition in which a 0 and f3 0 is called a head transitionthe interpretation of q q w and v in transitions is similar to lefttoright transducers ie in transitioning from state q to state q the transducer quotreadsquot input symbol w and quotwritesquot output symbol v and as usual if w is then no read takes place for the transitionthe difference lies in the interpretation of the read position a and the write position 0to interpret the transition positions as transducer actions we consider notional input and output tapes divided into squareson such a tape one square is numbered 0 and the other squares are numbered 1 2 rightwards from square 0 and 1 2 leftwards from square 0 a transition with input position a and output position 0 is interpreted as reading w from square a on the input tape and writing v to square 0 of the output tape if square 0 is already occupied then v is written to the next empty square to the left of 0 if 0 and similarly if input was already read from position a w is taken from the next unread square to the left of a if a 0the operation of a head transducer is nondeterministicit starts by taking a head transition kg 9 wo vo 0 0 c where wo is one of the symbols in the input string wo is considered to be at square 0 of the input tape and vo is output at square 0 of the output tapefurther state transitions may then be taken until a final state in f is reachedfor a derivation to be valid it must read each symbol 
in the input string exactly onceat the end of a derivation the output string is formed by taking the sequence of symbols on the target tape ignoring any empty squares on this tapethe cost of a derivation of an input string to an output string by a weighted head transducer is the sum of the costs of transitions taken in the derivationwe can now define the stringtostring transduction function for a head transducer to be the function that maps an input string to the output string produced by the lowestcost valid derivation taken over all initial states and initial symbolsin the transducers produced by the training method described in this paper the source and target positions are in the set 1 01 though we have also used handcoded transducers and automatically trained transducers with a larger range of positionsthe operation of a traditional lefttoright transducer can be simulated by a head transducer by starting at the leftmost input symbol and setting the positions of the first transition taken to a 0 and 3 0 and the positions for subsequent transitions to a 1 and 3 1however we can illustrate the fact that head transducers are more head transducer to reverse an input string of arbitrary length in the alphabet a b expressive than lefttoright transducers by the case of a finitestate head transducer that reverses a string of arbitrary lengthfor example the head transducer described below with input alphabet a b will reverse an input string of arbitrary length in that alphabetthe states of the example transducer are q qi q2 and f q2 and it has the following transitions the only possible complete derivations of the transducer read the input string right to left but write it left to right thus reversing the stringanother similar example is using a finitestate head transducer to convert a palindrome of arbitrary length into one of its component halvesthis clearly requires the use of an empty string on some of the output transitionsin this section we describe dependency transduction models which can be used for machine translation and other transduction tasksthese models consist of a collection of head transducers that are applied hierarchicallyapplying the machines hierarchically means that a nonhead transition is interpreted not simply as reading an inputoutput pair but instead as reading and writing a pair of strings headed by according to the derivation of a subnetworkfor example the head transducer shown in figure 3 can be applied recursively in order to convert an arithmetic expression from infix to prefix notation in the case of machine translation the transducers derive pairs of dependency trees a source language dependency tree and a target dependency treea dependency tree for a sentence in the sense of dependency grammar is a tree in which the words of the sentence appear as nodes in such a tree the parent of a node is its head and the child of a node is the node dependentthe source and target dependency trees derived by a dependency transduction model are ordered ie there is an ordering on the nodes of each local treethis synchronized dependency trees derived for transducing i want to make a collect call into quiero hacer una llamada de cobrar means in particular that the target sentence can be constructed directly by a simple recursive traversal of the target dependency treeeach pair of source and target trees generated is synchronized in the sense to be formalized in section 42an example is given in figure 4head transducers and dependency transduction models are thus related as follows 
each pair of local trees produced by a dependency transduction derivation is the result of a head transducer derivationspecifically the input to such a head transducer is the string corresponding to the flattened local source dependency treesimilarly the output of the head transducer derivation is the string corresponding to the flattened local target dependency treein other words the head transducer is used to convert a sequence consisting of a headword w and its left and right dependent words to a sequence consisting of a target word v and its left and right dependent words since the empty string may appear in a transition in place of a source or target symbol the number of source and target dependents can be differentthe cost of a derivation produced by a dependency transduction model is the sum of all the weights of the head transducer derivations involvedwhen applying a dependency transduction model to language translation we choose the target string obtained by flattening the target tree of the lowestcost dependency derivation that also generates the source stringwe have not yet indicated what weights to use for head transducer transitionsthe definition of head transducers as such does not constrain thesehowever for a dependency transduction model to be a statistical model for generating pairs of strings we assign transition weights that are derived from conditional probabilitiesseveral head transducer converts the sequences of left and right dependents and of w into left and right dependents and of v probabilistic parameterizations can be used for this purpose including the following for a transition with headwords w and v and dependent words w and v phere q and q are the fromstate and tostate for the transition and a and 13 are the source and target positions as beforewe also need parameters p for the probability of choosing a head transition given this pair of headwordsto start the derivation we need parameters p for the probability of choosing wovo as the root nodes of the two treesthese model parameters can be used to generate pairs of synchronized dependency trees starting with the topmost nodes of the two trees and proceeding recursively to the leavesthe probability of such a derivation can be expressed as for a derivation in which the dependents of w and v are generated by n transitionsto carry out translation with a dependency transduction model we apply a dynamic programming search to find the optimal derivationthis algorithm can take as input either word strings or word lattices produced by a speech recognizerthe algorithm is similar to those for contextfree parsing such as chart parsing and the cky algorithm since word string input is a special case of word lattice input we need only describe the case of latticeswe now present a sketch of the transduction algorithmthe algorithm works bottomup maintaining a set of configurationsa configuration has the form fli n2 w v q c t corresponding to a bottomup partial derivation currently in state q covering an input sequence between nodes n1 and n2 of the input lattice w and v are the topmost alshawi bangalore and douglas learning dependency translation models nodes in the source and target derivation treesonly the target tree t is stored in the configurationthe algorithm first initializes configurations for the input words and then performs transitions and optimizations to develop the set of configurations bottomup such an initial configuration has the form nn wo vo qcvo it is applicable when there are the following head and 
dependent configurations where the dependent configuration is in a final state qfthe result of applying the transition is to add the following to the set of configurations ni n3 w v q c ci c g where t is the target dependency tree formed by adding t1 as the rightmost dependent of t nnwvqci ti nn w v qc2t2 and c2 cl the second configuration is removed from the set of configurationsif after all applicable transitions have been taken there are configurations spanning the entire input lattice then the one with the lowest cost is the optimal derivationwhen there are no such configurations we take a pragmatic approach in the translation application and simply concatenate the lowest costing of the minimal length sequences of partial derivations that span the entire latticea viterbilike search of the graph formed by configurations is used to find the optimal sequence of derivationsone of the advantages of middleout transduction is that robustness is improved through such use of partial derivations when no complete derivations are availableour training method for head transducer models only requires a set of training exampleseach example or bitext consists of a source language string paired with a target language stringin our experiments the bitexts are transcriptions of spoken english utterances paired with their translations into spanish or japaneseit is worth emphasizing that we do not necessarily expect the dependency representations produced by the training method to be traditional dependency structures for the two languagesinstead the aim is to produce bilingual dependency representations that are appropriate to performing the translation task for a specific language pair or specific bilingual corpusfor example headwords in both languages are chosen to force a synchronized alignment in order to simplify cases involving socalled headswitchingthis contrasts with one of the traditional approaches to posing the translation problem ie the approach in which translation problems are seen in terms of bridging the gap between the most natural monolingual representations underlying the sentences of each languagethe training method has four stages compute cooccurrence statistics from the training data search for an optimal synchronized hierarchical alignment for each bitext construct a set of head transducers that can generate these alignments with transition weights derived from maximum likelihood estimationfor each source word w in the data set assign a cost the translation pairing cost c for all possible translations v into the target languagethese translations of the source word may be zero one or several target language words the assignment of translation pairing costs may be done using various statistical measuresfor this purpose a suitable statistical function needs to indicate the strength of cooccurrence correlation between source and target words which we assume is indicative of carrying the same semantic contentour preferred choice of statistical measure for assigning the costs is the 0 correlation measure we apply this statistic to cooccurrence of the source word with all its possible translations in the data set exampleswe have found that at least for our data this measure leads to better performance than the use of the log probabilities of target words given source words in addition to the correlation measure the cost for a pairing includes a distance measure component that penalizes pairings proportionately to the difference between the positions of the source and target words in their 
respective sentencesas noted earlier dependency transduction models are generative probabilistic models each derivation generates a pair of dependency treessuch a pair can be represented as a synchronized hierarchical alignment of two stringsa hierarchical alignment consists of four functionsthe first two functions are an alignment mapping f from source words w to target words f and an inverse alignment mapping from target words v to source words f the inverse mapping is needed to handle mapping of target words to c it coincides with f for pairs without source the other two functions are a source headmap g mapping source dependent words w to their heads g in the source string and a target headmap h mapping target dependent words v to their headwords h in the target stringan a hierarchical alignment alignment mappings f and f and headmaps g and h example hierarchical alignment is shown in figure 6 a hierarchical alignment is synchronized if these conditions hold nonoverlap if w1 w2 then f f and similarly if 01 0 02 then f synchronization if f v and v then f h and f w similarly if f w and w e then f g and f v phrase contiguity the image under f of the maximal substring dominated by a headword w is a contiguous segment of the target stringwe hope that the context of discussion will make the typetoken distinction clear in the rest of this articlethe hierarchical alignment in figure 6 is synchronizedof course translations of phrases are not always transparently related by a hierarchical alignmentin cases where the mapping between a source and target phrase is unclear then the most reasonable choice of hierarchical alignment may be for f and f to link the heads of the phrases only all the other words being mapped to e with no constraints on the monolingual head mappings h and g in the hierarchical alignments produced by the training method described here the source and target strings of a bitext are decomposed into three aligned regions as shown in figure 7 a head region consisting of headword w in the source and its corresponding target f in the target string a left substring region consisting of the source substring to the left of w and its projection under f on the target string and a right substring region consisting of the source substring to the right of w and its projection under f on the target stringthe decomposition is recursive in that the left substring region is decomposed around a left headword w1 and the right substring region is decomposed around a right headword w this process of decomposition continues for each left and right substring until it only contains a single wordfor each bitext there are in general multiple such recursive decompositions that satisfy the synchronization constraints for hierarchical alignmentswe wish to find such an alignment that respects the cooccurrence statistics of bitexts as well as the phrasal structure implicit in the source and target stringsfor this purpose we define a cost function on hierarchical alignmentsthe cost function is the sum of three termsthe first term is the total of all the translation pairing costs c of each source word w and its translation f in the alignment the second term is proportional to the distance in the source string between dependents wd and their heads g and the third term is proportional to the distance in the target string between target dependent words vd and their heads hthe hierarchical alignment that minimizes this cost function is computed using a dynamic programming procedurein this procedure the pairing 
costs are first retrieved for each possible sourcetarget pair allowed by the exampleadjacent source substrings are then combined to determine the lowestcost subalignments for successively larger substrings of the bitext satisfying the constraints stated abovethe successively larger substrings eventually span the entire source string yielding the optimal hierarchical alignment for the bitextthis procedure has 0 complexity in the number of words in the source sentencein alshawi and douglas we describe a version of the alignment algorithm in which heads may have an arbitrary number of dependents and in which the hierarchical alignments for the training corpus are refined by iterative reestimationbuilding a head transducer involves creating appropriate head transducer states and tracing hypothesized head transducer transitions between them that are consistent with the hierarchical alignment of a bitextthe main transitions that are traced in our construction are those that map heads w1 and w of the right and left dependent phrases of w to their translations as indicated by the alignment function f in the hierarchical alignmentthe positions of the dependents in the target string are computed by comparing the positions of f and f to the position of v f in order to generalize from instances in the training data some model states arising from different training instances are sharedin particular in the construction described here for a given pair there is only one final stateto specify states and transitions constructed for the quotswappingquot decomposition shown in figure 7 the sharing of states we make use of a onetoone statenaming function a from sequences of strings to transducer statesthe same statenaming function is used for all examples in the data set ensuring that the transducer fragments recorded for the entire data set will form a complete collection of head transducer transition networksfigure 7 shows a decomposition in which w has a dependent to either side v has both dependents to the right and the alignment is quotswappingquot is to the right of f the construction for this decomposition case is illustrated in figure 8 as part of a finitestate transition diagram and described in more detail below pairingsother cases covered by our algorithm are simple variantsthe detailed construction is as follows if instead the alignment had been as in figure 9 in which the source dependents are mapped to target dependents in a parallel rather than swapping configuration the construction is the same except for the following differences other states are the same as for the first casethe resulting states and transitions are shown in figure 10after the construction described above is applied to the entire set of aligned hitexts in the training set the counts for transitions are treated as event observation counts of a statistical dependency transduction model with the parameters described in section 31more specifically the negated logs of these parameters are used as the weights for transducer transitionsin the translation application source word w and target word v are generalized so they can be short substrings of the source and target stringsexamples of such multiword pairs are show memuestreme and nonstopsin escalas in figure 6the cost for such pairings still uses the same 0 statistic now taking the observations to be the cooccurrences of the substrings in the training bitextshowever in order that these costs can be comparable to the costs for simple pairings they are multiplied by the number of 
words in the source substring of the pairingthe use of compounds in pairings does not require any fundamental changes to the hierarchical alignment dynamic programming algorithm which simply produces dependency trees with nodes that may be compoundsin the transducer construction phase of the training method one of the words of a compound is taken to be the primary or quotrealquot headwordan extra chain of transitions is constructed to transduce the other words of compounds if necessary using transitions with epsilon stringsthis compilation means that the transduction algorithm is unaffected by the use of compounds when aligning training data and there is no need for a separate compound identification phase when the transduction algorithm is applied to test datasome results for different choices of substring lengths can be found in alshawi bangalore and douglas in order to reduce the time required to carry out training evaluation experiments we have chosen two simple stringbased evaluation metrics that can be calculated automaticallythese metrics simple accuracy and translation accuracy are used to compare the target string produced by the system against a reference human translation from heldout datasimple accuracy is computed by first finding a transformation of one string into another that minimizes the total weight of insertions deletions and substitutionstranslation accuracy includes transpositions of words as well as insertions deletions and substitutionswe regard the latter metric as more appropriate for evaluation of translation systems because the simple metric would count a transposition as two errors an insertion plus a deletionfor the lowest editdistance transformation between the reference translation and system output if we write i for the number of insertions d for deletions s for substitutions and r for number of words in the reference translation string we can express simple accuracy as simple accuracy 1 rsimilarly if t is the number of transpositions in the lowest weight transformation including transpositions we can express translation accuracy as translation accuracy 1 rsince a transposition corresponds to an insertion and a deletion the values of i and d for translation accuracy will in general be different from i and d in the computation of simple accuracyfor spanish the units for string operations in the evaluation metrics are words whereas for japanese they are japanese charactersthe training and test data for the englishtospanish experiments were taken from a set of transcribed utterances from the air travel information system corpus together with a translation of each utterance to spanishan utterance is typically a single sentence but is sometimes more than one sentence spoken in sequencealignment search and transduction training was carried out only on bitexts with sentences up to length 20 a total of 13966 training bitextsthe test set consisted of 1185 heldout bitexts at all lengthstable 1 shows the word accuracy percentages for the trained model e2s against the original heldout translations at various source sentence lengthsscores are also given for a quotwordforwordquot baseline sww in which each english word is translated by the most highly correlated spanish wordthe training and test data for the englishtojapanese experiments was a set of transcribed utterances of telephone service customers talking to att operatorsthese utterances collected from real customeroperator interactions tend to include fragmented language restarts etcboth training and test 
partitions were restricted to bitexts with at most 20 english words giving 12226 training bitexts and 3253 heldout test bitextsin the japanese text we introduce quotwordquot boundaries that are convenient length 5 10 15 20 all jww 758780 452504 400454 372428 372428 e2j 892897 740766 686722 664701 664701 for the training processthese word boundaries are parasitic on the word boundaries in the english transcriptions the translators are asked to insert such a word boundary between any two japanese characters that are taken to have arisen from the translation of distinct english wordsthis results in bitexts in which the number of multicharacter japanese quotwordsquot is at most the number of english wordshowever as noted above evaluation of the japanese output is done with japanese characters ie with the japanese text in its natural formattable 2 shows the japanese character accuracy percentages for the trained englishtojapanese model e2j and a baseline model jww which gives each english word its most highly correlated translationthe vocabularies in these englishspanish and englishjapanese experiments are only a few thousand words the utterances are fairly short and often contain errors typical of spoken languageso while the domains may be representative of taskoriented dialogue settings further experimentation would be needed to assess the effectiveness of our method in situations such as translating newspaper articlesin terms of the training data required tsukada et al provide indirect empirical evidence suggesting accuracy can be further improved by increasing the size of our training sets though also suggesting that the learning curve is relatively shallow beyond the current size of corpusformalisms for finitestate and contextfree transduction have a long history and such formalisms have been applied to the machine translation problem both in the finitestate case and the contextfree case in this paper we have added to this line of research by providing a method for automatically constructing fully lexicalized statistical dependency transduction models from training examplesautomatically training a translation system brings important benefits in terms of maintainability robustness and reducing expert coding effort as compared with traditional rulebased translation systems the reduction of effort results in large part from being able to do without artificial intermediate representations of meaning we do not require the development of semantic mapping rules or the creation of a corpus including semantic annotationscompared with lefttoright transduction middleout transduction also aids robustness because when complete derivations are not available partial derivations tend to have meaningful headwordsat the same time we believe our method has advantages over the approach developed initially at ibm for training translation systems automaticallyone advantage is that our method attempts to model the natural decomposition of sentences into phrasesanother is that the compilation of this decomposition into lexically anchored finitestate head transducers produces implementations that are much more efficient than those for the ibm modelin particular our search algorithm finds optimal transductions of test sentences in less than quotreal timequot on a 300mhz processor that is the time to translate an utterance is less than the time taken to speak it an important consideration for our speech translation application
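The accuracy figures reported above can be reproduced approximately with the sketch below: simple accuracy is 1 - (I + D + S) / R over a minimal edit-distance alignment between the reference and the system output, and translation accuracy is 1 - (I + D + S + T) / R over an alignment that also counts transpositions. As an approximation, transpositions are limited here to adjacent swaps (optimal-string-alignment distance), which need not match the paper's exact transformation weighting; ref and hyp are word sequences for Spanish and character sequences for Japanese.

def edit_distance(ref, hyp, allow_transpositions=False):
    # minimal number of insertions, deletions, substitutions (and, optionally,
    # adjacent transpositions) needed to turn hyp into ref
    n, m = len(ref), len(hyp)
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i
    for j in range(m + 1):
        d[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # substitution or match
            if (allow_transpositions and i > 1 and j > 1
                    and ref[i - 1] == hyp[j - 2] and ref[i - 2] == hyp[j - 1]):
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)   # adjacent transposition
    return d[n][m]

def simple_accuracy(ref, hyp):
    return 1.0 - edit_distance(ref, hyp) / float(len(ref))

def translation_accuracy(ref, hyp):
    return 1.0 - edit_distance(ref, hyp, allow_transpositions=True) / float(len(ref))

For example, translation_accuracy('a b c'.split(), 'a c b'.split()) counts the swapped pair as a single transposition, whereas the simple metric charges it as two errors, which is exactly the motivation given above for preferring the transposition-aware score.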
J00-1004
learning dependency translation models as collections of finitestate head transducers. the paper defines weighted head transducers, finitestate machines that perform middleout string transduction. these transducers are strictly more expressive than the special case of standard lefttoright finitestate transducers. dependency transduction models are then defined as collections of weighted head transducers that are applied hierarchically. a dynamic programming search algorithm is described for finding the optimal transduction of an input string with respect to a dependency transduction model. a method for automatically training a dependency transduction model from a set of inputoutput example strings is presented. the method first searches for hierarchical alignments of the training examples, guided by correlation statistics, and then constructs the transitions of head transducers that are consistent with these alignments. experimental results are given for applying the training method to translation from english to spanish and japanese. we treat translation as a process of simultaneous induction of source and target dependency trees using head transduction. we present a twolevel arrangement of word ordering and chunk ordering by a hierarchically organized collection of finite state transducers. we induce parallel tree structures from unbracketed parallel text, modeling the generation of each node's children with a finitestate transducer.
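The correlation measure used above to assign translation pairing costs (the phi coefficient over co-occurrence counts in the training bitexts) can be computed from a 2x2 contingency table, as sketched below. The table layout and the conversion of correlation into a cost by simple negation are illustrative assumptions; the actual cost described in the paper also folds in a positional distance penalty and, for multiword pairings, a length multiplier.

import math

def phi_correlation(n_both, n_src_only, n_tgt_only, n_neither):
    # 2x2 contingency table over the training bitexts:
    #   n_both      - bitexts where the source word and the candidate target word both occur
    #   n_src_only  - bitexts where only the source word occurs
    #   n_tgt_only  - bitexts where only the target word occurs
    #   n_neither   - bitexts containing neither
    a, b, c, d = float(n_both), float(n_src_only), float(n_tgt_only), float(n_neither)
    denom = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom > 0 else 0.0

def pairing_cost(n_both, n_src_only, n_tgt_only, n_neither):
    # higher correlation should mean lower cost; plain negation is an illustrative choice
    return -phi_correlation(n_both, n_src_only, n_tgt_only, n_neither)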
models of translational equivalence among words parallel texts have properties that distinguish them from other kinds of parallel data first most words translate to only one other word second bitext correspondence is typically only partialmany words in each text have no clear equivalent in the other text this article presents methods for biasing statistical translation models to reflect these properties evaluation with respect to independent human judgments has confirmed that translation models biased in this fashion are significantly more accurate than a baseline knowledgefree model this article also shows how a statistical translation model can take advantage of preexisting knowledge that might be available about particular language pairs even the simplest kinds of languagespecific knowledge such as the distinction between content words and function words are shown to reliably boost translation model performance on some tasks statistical models that reflect knowledge about the model domain combine the best of both the rationalist and empiricist paradigms the idea of a computer system for translating from one language to another is almost as old as the idea of computer systems warren weaver wrote about mechanical translation as early as 1949 more recently brown et al suggested that it may be possible to construct machine translation systems automatically instead of codifying the human translation process from introspection brown and his colleagues proposed machine learning techniques to induce models of the process from examples of its input and output the proposal generated much excitement because it held the promise of automating a task that forty years of research have proven very laborintensive and errorprone yet very few other researchers have taken up the because partly because brown et al approach was quite a departure from the paradigm in vogue at the time brown et al built statistical models of equivalence models short in the context of computational linguistics translational equivalence is a relation that holds between two expressions with the same meaning where the two expressions are in different languages empirical estimation statistical translation models is typically based on texts of texts that are translations of each other as with all statistical models the best translation models are those whose parameters correspond best with the sources of variance in the data probabilistic translation models whose parameters reflect universal properties of translational equivalence andor existing knowledge about particular d166f 610 opperman drive eagan mn 55123 email danmelamedtwestgroupcom 1 the term translation model which is standard in the literature refers to a mathematical relationship two data sets hi this context the term implies nothing about the translation between natural languages automated or otherwise 2000 association for computational linguistics computational linguistics volume 26 number 2 languages and language pairs benefit from the best of both the empiricist and rationalist traditions this article presents three such models along with methods for efficiently estimating their parameters each new method is designed to account for an additional universal property of translational equivalence in bitexts 1 most word tokens translate to only one word token i approximate this tendency with a onetoone assumption 2 most text segments are not translated wordforword i build an explicit noise model 3 different linguistic objects have statistically different behavior in 
translation i show a way to condition translation models on different word classes to help account for the variety quantitative evaluation with respect to independent human judgments has shown that each of these three estimation biases significantly improves translation model accuracy over a baseline knowledgefree model however these biases will not produce the best possible translation models by themselves anyone attempting to build an optimal translation model should infuse it with all available knowledge sources including syntactic dictionary and cognate information my goal here is only to demonstrate the value of some previously unused kinds of information that are always available for translation modeling and to show how these information sources can be integrated with others parallel texts have properties that distinguish them from other kinds of parallel datafirst most words translate to only one other wordsecond bitext correspondence is typically only partialmany words in each text have no clear equivalent in the other textthis article presents methods for biasing statistical translation models to reflect these propertiesevaluation with respect to independent human judgments has confirmed that translation models biased in this fashion are significantly more accurate than a baseline knowledgefree modelthis article also shows how a statistical translation model can take advantage of preexisting knowledge that might be available about particular language pairseven the simplest kinds of languagespecific knowledge such as the distinction between content words and function words are shown to reliably boost translation model performance on some tasksstatistical models that reflect knowledge about the model domain combine the best of both the rationalist and empiricist paradigmsthe idea of a computer system for translating from one language to another is almost as old as the idea of computer systemswarren weaver wrote about mechanical translation as early as 1949more recently brown et al suggested that it may be possible to construct machine translation systems automaticallyinstead of codifying the human translation process from introspection brown and his colleagues proposed machine learning techniques to induce models of the process from examples of its input and outputthe proposal generated much excitement because it held the promise of automating a task that forty years of research have proven very laborintensive and errorproneyet very few other researchers have taken up the because partly because brown et al approach was quite a departure from the paradigm in vogue at the timeformally brown et al built statistical models of translational equivalence in the context of computational linguistics translational equivalence is a relation that holds between two expressions with the same meaning where the two expressions are in different languagesempirical estimation of statistical translation models is typically based on parallel texts or bitextspairs of texts that are translations of each otheras with all statistical models the best translation models are those whose parameters correspond best with the sources of variance in the dataprobabilistic translation models whose parameters reflect universal properties of translational equivalence andor existing knowledge about particular languages and language pairs benefit from the best of both the empiricist and rationalist traditionsthis article presents three such models along with methods for efficiently estimating their parameterseach new 
method is designed to account for an additional universal property of translational equivalence in bitexts quantitative evaluation with respect to independent human judgments has shown that each of these three estimation biases significantly improves translation model accuracy over a baseline knowledgefree modelhowever these biases will not produce the best possible translation models by themselvesanyone attempting to build an optimal translation model should infuse it with all available knowledge sources including syntactic dictionary and cognate informationmy goal here is only to demonstrate the value of some previously unused kinds of information that are always available for translation modeling and to show how these information sources can be integrated with othersa review of some previously published translation models follows an introduction to translation model taxonomy the core of the article is a presentation of the model estimation biases described abovethe last section reports the results of experiments designed to evaluate these innovationsthroughout this article i shall use cageigraphic letters to denote entire text corpora and other sets of sets capital letters to denote collections including sequences and bags and italics for scalar variablesi shall also distinguish between types and tokens by using bold font for the former and plain font for the latterthere are two kinds of applications of translation models those where word order plays a crucial role and those where it does notempirically estimated models of translational equivalence among word types can play a central role in both kinds of applicationsapplications where word order is not essential include for these applications empirically estimated models have a number of advantages over handcrafted models such as online versions of bilingual dictionariestwo of the advantages are the possibility of better coverage and the possibility of frequent updates by nonexpert users to keep up with rapidly evolving vocabulariesa third advantage is that statistical models can provide more accurate information about the relative importance of different translationssuch information is crucial for applications such as crosslanguage information retrieval in the vector space approach to cur the query vector q is in a different language from the document vectors d a wordtoword translation model t can map q into a vector q in the vector space of d in order for the mapping to be accurate t must be able to encode many levels of relative importance among the possible translations of each element of qa typical bilingual dictionary says only what the possible translations are which is equivalent to positing a uniform translational distributionthe performance of crosslanguage information retrieval with a uniform t is likely to be limited in the same way as the performance of conventional information retrieval without termfrequency information ie where the system knows which terms occur in which documents but not how often applications where word order is crucial include speech transcription for translation bootstrapping of ocr systems for new languages interactive translation and fully automatic highquality machine translation in such applications a wordtoword translation model can serve as an independent module in a more complex sequencetosequence translation modelthe independence of such a module is desirable for two reasons one practical and one philosophicalthe practical reason is illustrated in this article orderindependent translation 
models can be accurately estimated more efficiently in isolationthe philosophical reason is that words are an important epistemological category in our naive mental representations of languagewe have many intuitions about what words are and how they behavewe can bring these intuitions to bear on our translation models without being distracted by other facets of language such as phrase structurefor example the translation models presented in the last two chapters of melamed capture the intuitions that words can have multiple senses and that spaces in text do not necessarily delimit wordsthe independence of a wordtoword translation module in a sequencetosequence translation model can be effected by a twostage decompositionthe first stage is based on the observation that every sequence l is just an ordered bag and that the bag b can be modeled independently of its order 0for example the sequence consists of the bag c a b and the ordering relation 1if we represent each sequence l as a pair then 2 quotsentencetosentencequot might be a more transparent term than quotsequencetosequencequot but all the models that i am aware of apply equally well to sequences of words that are not sentencesnow let li and l2 be two sequences and let a be a onetoone mapping between the elements of l1 and the elements of l2borrowing a term from the operations research literature i shall refer to such mappings as assignmentslet a be the set of all possible assignments between l1 and l2using assignments we can decompose conditional and joint probabilities over sequences the second stage of decomposition takes us from bags of words to the words that they containthe following bag pair generation process illustrates how a wordtoword translation model can be embedded in a bagtobag translation model for languages li and 2 from li x according to the distribution trans to lexicalize the concept in the two languagessome concepts are not lexicalized in some languages so one of it and v may be emptya pair of bags containing m and n nonempty word sequences can be generated by a process where is anywhere between 1 and m n for notational convenience the elements of the two bags can be labeled so that bi 10 and b2 irlt vi where some of the ti and v may be emptythe elements of an assignment then are pairs of bag element labels a where each i ranges over di each ranges over vi v1 each i is distinct and each j is distinctthe label pairs in a given assignment can be generated in any order so there are 1 ways to generate an assignment of size 16 it follows that the probability of generating a pair of bags with a particular assignment a of size 1 is the above equation holds regardless of how we represent conceptsthere are many plausible representations such as pairs of trees from synchronous tree adjoining grammars lexical conceptual structures and wordnet synsets of course for a representation to be used a method must exist for estimating its distribution in dataa useful representation will reduce the entropy of the trans distribution which is conditioned on the concept distribution as shown in equation 10this topic is beyond the scope of this article howeveri mention it only to show how the models presented here may be used as building blocks for models that are more psycholinguistically sophisticatedto make the translation model estimation methods presented here as general as possible i shall assume a totally uninformative concept representationthe trans distribution itselfin other words i shall assume that each different pair of word 
sequence types is deterministically generated from a different concept so that trans is zero for all concepts except onenow a bagtobag translation model can be fully specified by the distributions of 1 and transthe probability distribution trans is a wordtoword translation modelunlike the models proposed by brown et al this model is symmetric because both word bags are generated together from a joint probability distributionbrown and his colleagues models reviewed in section 43 generate one half of the bitext given the other half so they are represented by conditional probability distributionsa sequencetosequence translation model can be obtained from a wordtoword translation model by combining equation 11 with order information as in equation 8the most general wordtoword translation model trans where ii and range over sequences in li and 2 has an infinite number of parametersthis model can be constrained in various ways to make it more practicalthe models presented in this article are based on the onetoone assumption each word is translated to at most one other wordin these models ü and ii may consist of at most one word eachas before one of the two sequences may be emptyi shall describe empty sequences as consisting of a special null word so that each word sequence will contain exactly one word and can be treated as a scalarhenceforth i shall write you and v instead of ü and under the onetoone assumption a pair of bags containing m and n nonempty words can be generated by a process where the bag size is anywhere between max and m n the onetoone assumption is not as restrictive as it may appear the explanatory power of a model based on this assumption may be raised to an arbitrary level by extending western notions of what words are to include words that contain spaces or several characters for example i have shown elsewhere how to estimate wordtoword translation models where a word can be a noncompositional compound consisting of several spacedelimited tokens for the purposes of this article however words are the tokens generated by my tokenizers and stemmers for the languages in questiontherefore the models in this article are only a first approximation to the vast complexities of translational equivalence between natural languagesthey are intended mainly as stepping stones towards better modelsmost methods for estimating translation models from bitexts start with the following intuition words that are translations of each other are more likely to appear in corresponding bitext regions than other pairs of wordsfollowing this intuition most authors begin by counting the number of times that word types in one half of the bitext cooccur with word types in the other halfdifferent cooccurrence counting methods stem from different models of cooccurrencea model of cooccurrence is a boolean predicate which indicates whether a given pair of word tokens cooccur in corresponding regions of the bitext spacedifferent models of cooccurrence are possible depending on the kind of bitext map that is available the languagespecific information that is available and the assumptions made about the nature of translational equivalenceall the translation models reviewed and introduced in this article can be based on any of the cooccurrence models described by melamed for expository purposes however i shall assume a boundarybased model of cooccurrence throughout this articlea boundarybased model of cooccurrence assumes that both halves of the bitext have been segmented into s segments so that segment you in one 
half of the bitext and segment v in the other half are mutual translations 1 1 and j 1i argue elsewhere that nods and hoche often cooccur as do nods and headthe direct association between nods and hoche and the direct association between nods and head give rise to an indirect association between hoche and headmany researchers have proposed greedy algorithms for estimating nonprobabilistic wordtoword translation models also known as translation lexicons given a reasonable similarity function the greedy algorithm works remarkably well considering how simple it ishowever the association scores in step 2 are typically computed independently of each otherthe problem with this independence assumption is illustrated in figure 1the two word sequences represent corresponding regions of an englishfrench bitextif nods and hoche cooccur much more often than expected by chance then any reasonable similarity metric will deem them likely to be mutual translationsnods and hoche are indeed mutual translations so their tendency to cooccur is called a direct associationnow suppose that nods and head often cooccur in englishthen hoche and head will also cooccur more often than expected by chancethe dashed arrow between hoche and head in figure 1 represents an indirect association since the association between hoche and head arises only by virtue of the association between each of them and nodsmodels of translational equivalence that are ignorant of indirect associations have quota tendency to be confused by collocatesquot paradoxically the irregularities in text and in translation mitigate the problemif noise in the data reduces the strength of a direct association then the same noise will reduce the strengths of any indirect associations that are based on this direct the two halves of the bitext a pair of aligned text segments in the unigram frequency of you in you the unigram frequency of v in v the number of times that you and v cooccur the probability that a token of you will be translated as a token of v associationon the other hand noise can reduce the strength of an indirect association without affecting any direct associationstherefore direct associations are usually stronger than indirect associationsif all the entries in a translation lexicon are sorted by their association scores the direct associations will be very dense near the top of the list and sparser towards the bottomgale and church have shown that entries at the very top of the list can be over 98 correcttheir algorithm gleaned lexicon entries for about 61 of the word tokens in a sample of 800 english sentencesto obtain 98 precision their algorithm selected only entries for which it had high confidence that the association score was highthese would be the word pairs that cooccur most frequentlya random sample of 800 sentences from the same corpus showed that 61 of the word tokens where the tokens are of the most frequent types represent 45 of all the word typesa similar strategy was employed by wu and xia and by fung fung skimmed off the top 238 of the nounnoun entries in her lexicon to achieve a precision of 716wu and xia have reported automatic acquisition of 6517 lexicon entries from a 33millionword corpus with a precision of 86the first 33 million word tokens in an english corpus from a similar genre contained 33490 different word types suggesting a recall of roughly 19note however that wu and xia chose to weight their precision estimates by the probabilities attached to each entry for example if the translation set for english word 
detect has the two correct chinese candidates with 0533 probability and with 0277 probability and the incorrect translation with 0190 probability then we count this as 0810 correct translations and 0190 incorrect translations this is a reasonable evaluation method but it is not comparable to methods that simply count each lexicon entry as either right or wrong a weighted precision estimate pays more attention to entries that are more frequent and hence easier to estimatetherefore weighted precision estimates are generally higher than unweighted onesmost probabilistic translation model reestimation algorithms published to date are variations on the theme proposed by brown et al these models involve conditional probabilities but they can be compared to symmetric models if the latter are normalized by the appropriate marginal distributioni shall review these models using the notation in table 1 ploy the expectationmaximization algorithm to estimate the parameters of their model 1on iteration i the them algorithm reestimates the model parameters transi based on their estimates from iteration i 1in model 1 the relationship between the new parameter estimates and the old ones is where z is a normalizing factorit is instructive to consider the form of equation 14 when all the translation probabilities trans for a particular you are initialized to the same constant p as brown et al actually do the initial translation probability trans1 is set proportional to the cooccurrence count of you and v and inversely proportional to the length of each segment you in which you occursthe intuition behind the numerator is central to most bitextbased translation models the more often two words cooccur the more likely they are to be mutual translationsthe intuition behind the denominator is that the cooccurrence count of you and v should be discounted to the degree that v also cooccurs with other words in the same segment pairnow consider how equation 16 would behave if all the text segments on each side were of the same length so that each token of v cooccurs with exactly c words the normalizing coefficient is constant over all wordsthe only difference between equations 16 and 18 is that the former discounts cooccurrences proportionally to the segment lengthswhen information about segment lengths is not available the only information available to initialize model 1 is the cooccurrence countsthis property makes model 1 an appropriate baseline for comparison to more sophisticated models that use other information sources both in the work of brown and his colleagues and in the work described here the true bitext map correlate with the positions of their translationsthe correlation is stronger for language pairs with more similar word orderbrown et al introduced the idea that this correlation can be encoded in translation model parametersdagan church and gale expanded on this idea by replacing brown et al word alignment parameters which were based on absolute word positions in aligned segments with a much smaller set of relative offset parametersthe much smaller number of parameters allowed dagan church and gale model to be effectively trained on much smaller bitextsvogel ney and tillmann have shown how some additional assumptions can turn this model into a hidden markov model enabling even more efficient parameter estimationit cannot be overemphasized that the word order correlation bias is just knowledge about the problem domain which can be used to guide the search for the optimum model parameterstranslational 
equivalence can be empirically modeled for any pair of languages but some models and model biases work better for some language pairs than for othersthe word order correlation bias is most useful when it has high predictive power ie when the distribution of alignments or offsets has low entropythe entropy of this distribution is indeed relatively low for the language pair that both brown and his colleagues and dagan church and gale were working withfrench and english have very similar word ordera word order correlation bias as well as the phrase structure biases in brown et al models 4 and 5 would be less beneficial with noisier training bitexts or for language pairs with less similar word ordernevertheless one should use all available information sources if one wants to build the best possible translation modelsection 53 suggests a way to add the word order correlation bias to the models presented in this articleat about the same time that i developed the models in this article hiemstra independently developed his own bagtobag model of translational equivalencehis model is also based on a onetoone assumption but it differs from my models in that it allows empty words in only one of the two bags the one representing the shorter sentencethus hiemstra model is similar to the first model in section 5 but it has a little less explanatory powerhiemstra approach also differs from mine in his use of the iterative proportional fitting procedure for parameter estimationthe ipfp is quite sensitive to initial conditions so hiemstra investigated a number of initialization optionschoosing the most advantageous hiemstra has published parts of the translational distributions of certain words induced using both his method and brown et al model 1 from the same training bitextsubjective comparison of these examples suggests that hiemstra method is more accuratehiemstra has also evaluated the recall and precision of his method and of model 1 on a small handconstructed set of link tokens in a particular bitextmodel 1 fared worse on averagethis section describes my methods for estimating the parameters of a symmetric wordtoword translation model from a bitextfor most applications we are interested in estimating the probability trans of jointly generating the pair of words unfortunately these parameters cannot be directly inferred from a training bitext because we do not know which words in one half of the bitext were generated together with which words in the other halfthe observable features of the bitext are only the cooccurrence counts cooc methods for estimating translation parameters from cooccurrence counts typically involve link counts links which represent hypotheses about the number of times that you and v were generated together for each you and v in the bitexta link token is an ordered pair of word tokens one from each half of the bitexta link type is an ordered pair of word typesthe link counts links range over link typeswe can always estimate trans by normalizing link counts so that e trans 1 for estimation purposes it is convenient to also employ a separate set of nonprobabilistic parameters score which represent the chances that you and v can ever be mutual translations ie that there exists some context where tokens you and v are generated from the same conceptthe relationship between score and trans can be more or less direct depending on the model and its estimation methodeach of the models presented below uses a different score formulationall my methods for estimating the translation 
parameters trans share the following general outline under certain conditions a parameter estimation process of this sort is an instance of the expectationmaximization algorithm as explained below meeting these conditions is computationally too expensive for my modelstherefore i employ some approximations which lack the them algorithm convergence guaranteethe maximum likelihood approach to estimating the unknown parameters is to find the set of parameters 6 that maximize the probability of the training bitext the probability of the bitext is a sum over the distribution a of possible assignments the number of possible assignments grows exponentially with the size of aligned text segments in the bitextdue to the parameter interdependencies introduced by the onetoone assumption we are unlikely to find a method for decomposing the assignments into parameters that can be estimated independently of each other as in brown et al 1993b equation 26barring such a decomposition method the mle approach is infeasiblethis is why we must make do with approximations to the them algorithmin this situation brown et al recommend quotevaluating the expectations using only a single probable alignmentquot the single most probable assignment amax is the maximum a posteriori assignment if we represent the bitext as a bipartite graph and weight the edges by log trans then the righthand side of equation 26 is an instance of the weighted maximum matching problem and amax is its solutionfor a bipartite graph g with v 1v1 you v21 and e el the lowest currently known upper bound on the computational complexity of this problem is 0 although this upper bound is polynomial it is still too expensive for typical bitexts1 subsection 512 describes a greedy approximation to the map approximation511 step 1 initializationalmost every translation model estimation algorithm exploits the wellknown correlation between translation probabilities and cooccurrence countsmany algorithms also normalize the cooccurrence counts cooc by the marginal frequencies of you and v however these quantities account for only the three shaded cells in table 2the statistical interdependence between two word types can be estimated more robustly by considering the whole tablefor example gale and church suggest that quot02 a x2like statistic seems to be a particularly good choice because it makes good use of the offdiagonal cellsquot in the contingency tablein informal experiments described elsewhere i found that the g2 statistic suggested by dunning slightly outperforms 02let the cells of the contingency table be named as follows where b pknk are binomial probabilitiesthe statistic uses maximum likelihood estimates for the probability parameters p1 bf p2 ccd p aabccfd g2 is easy to compute because the binomial coefficients in the numerator and in the denominator cancel each other outall my methods initialize the parameters score to g2 except that any pairing with null is initialized to an infinitesimal valuei have also found it useful to smooth the cooccurrence counts eg using the simple goodturing smoothing method before computing g2512 step 2 estimation of link countsto further reduce the complexity of estimating link counts i employ the competitive linking algorithm which is a greedy approximation to the map approximation bitext linked to nullotherwise link all cooccurring token pairs in the bitext the onetoone assumption implies that linked words cannot be linked againtherefore remove all linked word tokens from their respective halves of the bitextthe 
competitive linking algorithm can be viewed as a heuristic search for the most likely assignment in the space of all possible assignmentsthe heuristic is that the most likely assignments contain links that are individually the most likelythe search proceeds by a process of eliminationin the first search iteration all the assignments that do not contain the most likely link are discardedin the second iteration all the assignments that do not contain the second most likely link are discarded and so on until only one assignment remainsquot the algorithm greedily selects the most likely links first and then selects less likely links only if they do not conflict with previous selectionsthe probability of a link being rejected increases with the number of links that are selected before it and thus decreases with the link scorein this problem domain the competitive linking algorithm usually finds one of the most likely assignments as i will show in section 6under an appropriate hashing scheme the expected running time of the competitive linking algorithm is linear in the size of the input bitextthe competitive linking algorithm and its onetoone assumption are potent weapons against the everpresent sparse data problemthey enable accurate estimation of translational distributions even for words that occur only once as long as the surrounding words are more frequentin most translation models link scores are correlated with cooccurrence frequencyso links between tokens you and v for which score is highest are the ones for which there is the most evidence and thus also the ones that are easiest to predict correctlywinnertakeall link assignment methods such as the competitive linking algorithm can prevent links based on indirect associations thereby leveraging their accuracy on the more confident links to raise the accuracy of the less confident linksfor example suppose that u1 and u2 cooccur with v1 and v2 in the training data and the model estimates score 05 score 02 and score 01according to the onetoone assumption is an indirect association and the correct translation of 02 is u2to the extent that the onetoone assumption is valid it reduces the probability of spurious links for the rarer wordsthe more incorrect candidate translations can be eliminated for a given rare word the more likely the correct translation is to be foundso the probability of a correct match for a rare word is proportional to the fraction of words around it that can be linked with higher confidencethis fraction is largely determined by two bitext properties the distribution of word frequencies and the distribution of cooccurrence countsmelamed explores these properties in greater depth parameters as the logarithm of the trans parametersthe competitive linking algorithm only cares about the relative magnitudes of the various scorehowever equation 26 is a sum rather than a product so i scale the trans parameters logarithmically to be consistent with its probabilistic interpretation yarowsky has shown that quotfor several definitions of sense and collocation an ambiguous word has only one sense in a given collocation with a probability of 9099quot in other words a single contextual clue can be a highly reliable indicator of a word senseone of the definitions of quotsensequot studied by yarowsky was a word token translation in the other half of a bitextfor example the english word sentence may be considered to have two senses corresponding to its french translations peine and phrase if a token of sentence occurs in the vicinity of 
a word like jury or prison then it is far more likely to be translated as peine than as phrasequotin the vicinity ofquot is one kind of collocationcooccurrence the ratio links i cooc for several values of cooc in bitext space is another kind of collocationif each word translation is treated as a sense tag then quottranslationalquot collocations have the unique property that the collocate and the word sense are one and the samemethod b exploits this property under the hypothesis that quotone sense per collocationquot holds for translational collocationsthis hypothesis implies that if you and v are possible mutual translations and a token you cooccurs with a token v in the bitext then with very high probability the pair was generated from the same concept and should be linkedto test this hypothesis i ran one iteration of method a on 300000 aligned sentence pairs from the canadian hansards bitexti then plotted the of the competitive linking process because in the first iteration linking decisions are based only on the initial similarity metricinformation about how often words cooccur without being linked can be used to bias the estimation of translation model parametersthe smaller the ratio icionokcsuuquotvv the more likely it is that you and v are not mutual translations and that links posited between tokens of you and v are noisethe bias can be implemented via auxiliary parameters that model the curve illustrated in figure 2the competitive linking algorithm creates all the links of a given type independently of each otherso the distribution of the number links of links connecting word types you and v can be modeled by a binomial distribution with parameters cooc and p p is the probability that you and v will be linked when they cooccurthere is never enough data to robustly estimate each p parameter separatelyinstead i shall model all the p with just two parametersfor you and v that are mutual translations p will average to a relatively high probability which i will call a for you and v that are not mutual translations p will average to a relatively low probability which i will call a a and acorrespond to the two peaks of the distribution which is illustrated in figure 2the two parameters can also be interpreted as the rates of true and false positivesif the translation in the bitext is consistent and the translation model is accurate then a will be close to one and a will be close to zeroto find the most likely values of the auxiliary parameters a and a i adopt the standard method of maximum likelihood estimation and find the values that maximize the probability of the link frequency distributions under the usual independence assumptions table 3 summarizes the variables involved in this auxiliary estimation processthe factors on the righthand side of equation 29 can be written explicitly with the help of a mixture coefficientlet t be the probability that an arbitrary cooccurring pair of word types are mutual translationslet b denote the probability that k links are observed out of n cooccurrences where k has a binomial distribution with parameters n and p then the probability that word types you and v will be linked links times out of cooc cooccurrences is a mixture of two binomials one more variable allows us to express t in terms of a and a let a be the probability that an arbitrary cooccuring pair of word tokens will be linked regardless of whether they are mutual translationssince t is constant over all word types it also represents the probability that an arbitrary cooccurring pair of 
word tokens are mutual translationstherefore pr as given in equation 29 has only one global maximum in the region of interest where 1 a a 0 and let n be the total number of word token pair cooccurrences equating the righthand sides of equations 31 and 34 and rearranging the terms we get in the preceding equations either you or v can be nullhowever the number of times that a word cooccurs with null is not an observable feature of bitextsto make sense of cooccurrences with null we can view cooccurrences as potential links and cooc as the maximum number of times that tokens of you and v might be linkedfrom this point of view cooc should be set to the unigram frequency of you since each token of you represents one potential link to nullsimilarly for coocthese cooccurrence counts should be summed together with all the others in equation 33the probability function expressed by equations 29 and 30 may have many local maximain practice these local maxima are like pebbles on a mountain invisible at low resolutioni computed equation 29 over various combinations of a and a after one iteration of method a over 300000 aligned sentence pairs from the canadian hansard bitextfigure 3 illustrates that the region of interest in the parameter space where 1 a a a 0 has only one dominant global maximumthis global maximum can be found by standard hillclimbing methods as long as the step size is large enough to avoid getting stuck on the pebblesgiven estimates for a and a we can compute blcooc a and bicooc a for each occurring combination of links and cooc valuesthese are the probabilities that links links were generated out of cooc possible links by a process that generates correct links and by a process that generates incorrect links respectivelythe ratio of these probabilities is the likelihood ratio in favor of the types you and v being possible mutual translations for all you and v method b differs from method a only in its redefinition of the score function in equation 36the auxiliary parameters a and a and the noise model that they represent can be employed the same way in translation models that are not based on the onetoone assumptionin method b the estimation of the auxiliary parameters a and a depends only on the overall distribution of cooccurrence counts and link frequenciesall word pairs that cooccur the same number of times and are linked the same number of times are assigned the same scoremore accurate models can be induced by taking into account various features of the linked tokensfor example frequent words are translated less consistently than rare words to account for these differences we can estimate separate values of a and a for different ranges of coocsimilarly the auxiliary parameters can be conditioned on the linked parts of speecha kind of word order correlation bias can be effected by conditioning the auxiliary parameters on the relative positions of linked word tokens in their respective textsjust as easily we can model link types that coincide with entries in an online bilingual dictionary separately from those that do not when the auxiliary parameters are conditioned on different link classes their optimization is carried out separately for each class b icooc azf scorec log b lcooc section 611 describes the link classes used in the experiments belowthis section compares translation model estimation methods a b and c to each other and to brown et al model 1to reiterate model 1 is based on cooccurrence information only method a is based on the onetoone assumption method b adds the 
quotone sense per collocationquot hypothesis to method a method c conditions the auxiliary parameters of method b on various word classeswhereas methods a and b and model 1 were fully specified in section 431 and section 5 the latter section described a variety of features on which method c might classify linksfor the purposes of the experiments described in this article method c employed the simple classification in table 4 for both languages in the bitextall classification was performed by table lookup no contextaware partofspeech tagger was usedin particular words that were ambiguous between open classes and closed classes were always deemed to be in the closed classthe only languagespecific knowledge involved in this classification method is the list of function words in class f certainly more sophisticated word classification methods could produce better models but even the simple classification in table 4 should suffice to demonstrate the method potential611 experiment 1until now translation models have been evaluated either subjectively or using relative metrics such as perplexity with respect to other models objective and more accurate tests can be carried out using a quotgold standardquot i hired bilingual annotators to link roughly 16000 corresponding words between online versions of the bible in french and englishthis bitext was selected to facilitate widespread use and standardization the entire bible bitext comprised 29614 verse pairs of which 250 verse pairs were handlinked using a specially developed annotation toolthe annotation style guide was based on the intuitions of the annotators so it was not biased towards any particular translation modelthe annotation was replicated five times by seven different annotatorseach of the four methods was used to estimate a wordtoword translation model from the 29614 verse pairs in the bible bitextall methods were deemed to have converged when less than 0001 of the translational probability distribution changed from one iteration to the nextthe links assigned by each of methods a b and c in the last iteration were normalized into joint probability distributions using equation 19i shall refer to these joint distributions as model a model b and model c respectivelyeach of the joint probability distributions was further normalized into two conditional probability distributions one in each directionsince model 1 is inherently directional its conditional probability distributions were estimated separately in each direction instead of being derived from a joint distributionthe four models predictions were compared to the gold standard annotationseach model guessed one translation for each word on one side of the gold standard bitexttherefore precision recall here and i shall refer to the results simply as quotpercent correctquot the accuracy of each model was averaged over the two directions of translation english to french and french to englishthe fivefold replication of annotations in the test data enabled computation of the statistical significance of the differences in model accuracythe statistical significance of all results in this section was measured at the a 05 level using the wilcoxon signed ranks testalthough the models were evaluated on part of the same bitext on which they were trained the evaluations were with respect to the translational equivalence relation hidden in this bitext not with respect to any of the bitext visible featuressuch testing on training data is standard practice for unsupervised learning algorithms where 
the objective is to compare several methodsof course performance would degrade on previously unseen datain addition to the different translation models there were two other independent variables in the experiment method of translation and whether function words were includedsome applications such as query translation for cur do not care about function wordsto get a sense of the relative effectiveness of the different translation model estimation methods when function words are taken out of the equation i removed from the gold standard all link tokens where one or both of the linked words were closedclass wordsthen i removed all closedclass words from the models and renormalized the conditional probabilitiesthe method of translation was either singlebest or whole distributionsinglebest translation is the kind that somebody might use to get the gist of a foreignlanguage documentthe input to the task was one side of the gold standard bitextthe output was the model single best guess about the translation of each word in the input together with the input wordin other words each model produced link tokens consisting of input words and their translationsfor some applications it is insufficient to guess only the single most likely translation of each word in the inputthe model is expected to output the whole distribution of possible translations for each input wordthis distribution is then combined with other distributions that are relevant to the applicationfor example for crosslanguage information retrieval the translational distribution can be combined with the distribution of term frequenciesfor statistical machine translation the translational distribution can be decoded with a source language model to predict how the different models might perform in such applications the whole distribution task was to generate a whole set of links from each input word weighted according to the probability assigned by the model to each of the input word translationseach model was tested on this task with and without function wordsthe mean results are plotted in figures 4 and 5 with 95 confidence intervalsall four graphs in these figures are on the same scale to facilitate comparisonon both tasks involving the entire vocabulary each of the biases presented in this article improves the efficiency of modeling the available training datawhen closedclass words were ignored model 1 performed better than method a because openclass words are more likely to violate the onetoone assumptionhowever the explicit noise model in methods b and c boosted their scores significantly higher than model 1 and method amethod b was better than method c at choosing the single best openclass links and the situation was reversed for the whole distribution of openclass linkshowever the differences in performance between these two methods were tiny on the openclass tasks because they left only two classes for method c to distinguish content words and nullsmost of the scores on the whole distribution task were lower than their counterparts on the singlebest translation task because it is more difficult for any statistical method to correctly model the less common translationsthe quotbestquot translations are usually the most common612 experiment 2to study how the benefits of the various biases vary with training corpus size i evaluated models a b c and 1 on the whole distribution translation task after training them on three differentsize subsets of the bible bitextthe first subset consisted of only the 250 verse pairs in the gold 
standardthe second subset included these 250 plus another random sample of 2250 for a total of 2500 an order of magnitude larger than the first subsetthe third subset contained all 29614 verse pairs in the bible bitext roughly an order of magnitude larger than the second subsetall models were compared to the five gold standard annotations and the scores were averaged over the two directions of translation as beforeagain because the total probability assigned to all translations for each source word was one precision recall percent correct on this taskthe mean scores over the five gold standard annotations are graphed in figure 6 where the right edge of the figure corresponds to the means of figure 5the figure supports the hypothesis in melamed that the biases presented in this article are even more valuable when the training data are more sparsethe onetoone assumption is useful even though it forces us to use a greedy approximation to maximum likelihoodin relative terms the advantage of the onetoone assumption is much more pronounced on smaller training setsfor example model a is 102 more accurate than model 1 when trained on only 250 verse pairsthe explicit noise model buys a considerable gain in accuracy across all sizes of training data as do the link classes of model c in concert when trained and tested only on the gold standard test set the three biases outperformed model 1 by up to 125this difference is even more significant given the absolute performance ceiling of 82 established by the interannotator agreement rates on the gold standard62 evaluation at the type level an important application of statistical translation models is to help lexicographers compile bilingual dictionariesdictionaries are written to answer the question quotwhat are the possible translations of xquot this is a question about link types rather than about link tokensevaluation by link type is a thorny issuehuman judges often disagree about the degree to which context should play a role in judgments of translational equivalencefor example the harpercollins french dictionary gives the following french translations for english appoint nommer engager fixer designerlikewise most distribution of link type scoresthe long plateaus correspond to the most common combinations of links 1122 and 33 cooc lay judges would not consider instituer a correct french translation of appointin actual translations however when the object of the verb is commission task force panel etc english appoint is usually translated into french as instituerto account for this kind of contextdependent translational equivalence link types must be evaluated with respect to the bitext whence they were inducedi performed a post hoc evaluation of the link types produced by an earlier version of method b the bitext used for this evaluation was the same aligned hansards bitext used by gale and church except that i used only 300000 aligned segment pairs to save timethe bitext was automatically pretokenized to delimit punctuation english possessive pronouns and french elisionsmorphological variants in both halves of the bitext were stemmed to a canonical formthe link types assigned by the converged model were sorted by the scores in equation 36figure 7 shows the distribution of these scores on a log scalethe log scale helps to illustrate the plateaus in the curvethe longest plateau represents the set of word pairs that were linked once out of one cooccurrence in the bitextall these word pairs were equally likely to be correctthe secondlongest plateau 
resulted from word pairs that were linked twice out of two cooccurrences and the third longest plateau is from word pairs that were linked three times out of three cooccurrences as usual the entries with higher scores were more likely to be correctby discarding entries with lower scores coverage could be traded for accuracythis tradeoff was measured at three points representing cutoffs at the end of each of the three longest plateausthe traditional method of measuring coverage requires knowledge of the correct link types which is impossible to determine without a gold standardan approximate coverage measure can be based on the number of different words in the corpusfor lexicons extracted from corpora perfect coverage implies at least one entry containing each word in the corpusonesided variants which consider only source words have also been used table 5 shows both the marginal and the combined coverage at each of the three cutoff pointsit also shows the absolute number of entries in each of the three lexiconsof course the size of automatically induced lexicons depends on the size of the training bitexttable 5 shows that given a sufficiently large bitext the method can automatically construct translation lexicons with as many entries as published bilingual dictionariesthe next task was to measure accuracyit would have taken too long to evaluate every lexicon entry manuallyinstead i took five random samples of 100 entries each from each of the three lexiconseach of the samples was first compared to a translation lexicon extracted from a machinereadable bilingual dictionary all the entries in the sample that appeared in the dictionary were assumed to be correcti checked the remaining entries in all the samples by handto account for contextdependent translational equivalence i evaluated the accuracy of the translation lexicons in the context of the bitext whence they were extracted using a simple bilingual concordancera lexicon entry was considered correct if you and v ever appeared as direct translations of each other in an aligned segment pairthat is a link type was considered correct if any of its tokens were correctdirect translations come in different flavorsmost entries that i checked by hand were of the plain vanilla variety that you might find in a bilingual dictionary however a significant number of words translated into a different part of speech for instance in the entry the english word is a noun but the french word is an adjectivethis entry appeared because to have protection is often translated as etre protégé in the bitextthe entry will never occur in a bilingual dictionary but users of translation lexicons be they human or machine will want to know that translations often happen this waythe evaluation of translation models at the word type level is complicated by the possibility of phrasal translations such as immediatement 4 right awayall the methods being evaluated here produce models of translational equivalence between individual words onlyhow can we decide whether a singleword translation quotmatchesquot a phrasal translationthe answer lies in the observation that corpusbased lexicography usually involves a lexicographerbilingual lexicographers can work with bilingual concordancing software that can point them to instances of any link type induced from a bitext and display these instances sorted by their contexts given an incomplete link type the lexicographer can usually reconstruct the complete link type from the contexts in the concordancefor example if the model 
proposes an equivalence between irnmediatement and right a bilingual concordance can show the lexicographer that the model was really trying to capture the equivalence between immediatement and right away or between immediatement and right nowi counted incomplete entries in a third category whether links in this category should be considered correct depends on the applicationtable 6 shows the distribution of correct lexicon entries among the types v p and ifigure 8 graphs the accuracy of the method against coverage with 95 confidence intervalsthe upper curve represents accuracy when incomplete links are considered correct and the lower when they are considered incorrecton the former metric the method can generate translation lexicons with accuracy and coverage both exceeding 90 as well as dictionarysize translation lexicons that are over 99 correctthere are many ways to model translational equivalence and many ways to estimate translation modelsquotthe mathematics of statistical machine translationquot proposed by brown et al are just one kind of mathematics for one kind of statistical translationin this article i have proposed and evaluated new kinds of translation model biases alternative parameter estimation strategies and techniques for exploiting preexisting knowledge that may be available about particular languages and language pairson a variety of evaluation metrics each infusion of knowledge about the problem domain resulted in better translation modelseach innovation presented here opens the way for more researchmodel biases can be mixed and matched with each other with previously published biases like the word order correlation bias and with other biases yet to be inventedthe competitive linking algorithm can be generalized in various waysnew kinds of preexisting knowledge can be exploited to improve accuracy for particular language pairs or even just for particular bitextsit is difficult to say where the greatest advances will come fromyet one thing is clear from our current vantage point research on empirical methods for modeling translational equivalence has not run out of steam as some have claimed but has only just begunmuch of this research was performed at the department of computer and information science at the university of pennsylvania where it was supported by an equipment grant from sun microsystems laboratories and by arpa contract n6600194c6043many thanks to my former colleagues at upenn and to the anonymous reviewers for their insightful suggestions for improvement
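To make the baseline used in the comparisons above concrete, here is a minimal sketch of Model-1-style EM re-estimation of the conditional translation probabilities trans(v|u) from expected link counts. It is only the skeleton of Brown et al.'s formulation: there is no NULL source word, no segment-length discounting (the role of equation 16 in the text), initialization is uniform rather than co-occurrence-based, and the bitext layout (a list of (source-token-list, target-token-list) pairs) is my own assumption.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

def model1_em(bitext: List[Tuple[List[str], List[str]]],
              iterations: int = 5) -> Dict[Tuple[str, str], float]:
    """Model-1-style EM: re-estimate t(v|u) from expected link counts."""
    tgt_vocab = {v for _, tgt in bitext for v in tgt}
    t = defaultdict(lambda: 1.0 / len(tgt_vocab))   # uniform initialization
    for _ in range(iterations):
        count = defaultdict(float)                  # expected link counts
        total = defaultdict(float)                  # per-source-word normalizers
        for src, tgt in bitext:
            for v in tgt:
                z = sum(t[u, v] for u in src)       # how strongly v is "claimed"
                for u in src:
                    c = t[u, v] / z                 # fractional count for (u, v)
                    count[u, v] += c
                    total[u] += c
        t = defaultdict(float,
                        {(u, v): count[u, v] / total[u] for (u, v) in count})
    return dict(t)
```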
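The core of the symmetric methods is the combination of a G² association score with the greedy competitive linking algorithm under the one-to-one assumption. The sketch below uses the standard 2x2 contingency-table form of G² and links tokens within one aligned segment pair; it omits NULL linking, smoothing of co-occurrence counts, iterated re-estimation, and the logarithmic rescaling of scores, so it should be read as an approximation of Method A rather than a faithful reimplementation. The function names and data layout are my own.

```python
import math
from collections import Counter
from typing import Callable, Dict, List, Tuple

def g2(a: int, b: int, c: int, d: int) -> float:
    """2x2 contingency-table form of the G^2 (log-likelihood-ratio) score:
    a = segments where u and v co-occur, b = with u but not v,
    c = with v but not u, d = with neither."""
    def term(obs: float, exp: float) -> float:
        return obs * math.log(obs / exp) if obs > 0 else 0.0
    n = a + b + c + d
    e_a = (a + b) * (a + c) / n
    e_b = (a + b) * (b + d) / n
    e_c = (c + d) * (a + c) / n
    e_d = (c + d) * (b + d) / n
    return 2.0 * (term(a, e_a) + term(b, e_b) + term(c, e_c) + term(d, e_d))

def competitive_link(src: List[str], tgt: List[str],
                     score: Callable[[str, str], float]) -> List[Tuple[str, str]]:
    """Greedy competitive linking inside one aligned segment pair: repeatedly
    link the best-scoring pair whose tokens are both still unlinked
    (the one-to-one assumption). NULL linking is omitted here."""
    pairs = sorted(((score(u, v), i, j) for i, u in enumerate(src)
                    for j, v in enumerate(tgt)), reverse=True)
    used_src, used_tgt, links = set(), set(), []
    for s, i, j in pairs:
        if s > 0 and i not in used_src and j not in used_tgt:
            links.append((src[i], tgt[j]))
            used_src.add(i)
            used_tgt.add(j)
    return links

def method_a(bitext: List[Tuple[List[str], List[str]]]) -> Dict[Tuple[str, str], float]:
    """One pass of a Method-A-like estimator: boundary-based co-occurrence
    counts -> G^2 scores -> competitive linking -> link counts normalized
    into a joint trans(u, v) distribution."""
    cooc, fu, fv = Counter(), Counter(), Counter()
    for src, tgt in bitext:
        for u in set(src):
            fu[u] += 1
            for v in set(tgt):
                cooc[u, v] += 1
        for v in set(tgt):
            fv[v] += 1
    n = len(bitext)
    score = lambda u, v: g2(cooc[u, v], fu[u] - cooc[u, v],
                            fv[v] - cooc[u, v],
                            n - fu[u] - fv[v] + cooc[u, v])
    links = Counter()
    for src, tgt in bitext:
        links.update(competitive_link(src, tgt, score))
    total = sum(links.values()) or 1
    return {pair: k / total for pair, k in links.items()}
```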
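Method B's explicit noise model scores a candidate word pair by how much better a "true link" binomial rate lambda+ explains the observed links-out-of-co-occurrences than the noise rate lambda-. A minimal rendering of that likelihood ratio (the quantity behind equation 36) is below; the two default rates are placeholders of my own, since in the article lambda+ and lambda- are themselves fit by maximizing the likelihood of the observed link-frequency distribution, optionally per link class as in Method C.

```python
import math

def binom_pmf(k: int, n: int, p: float) -> float:
    """Binomial probability of k successes in n trials with success rate p."""
    return math.comb(n, k) * p ** k * (1.0 - p) ** (n - k)

def method_b_score(links: int, cooc: int,
                   lam_plus: float = 0.8, lam_minus: float = 0.01) -> float:
    """Likelihood ratio in favor of u and v being mutual translations:
    how much better lambda+ explains 'links out of cooc' than lambda-.
    The rates given here are illustrative defaults, not fitted values."""
    return math.log(binom_pmf(links, cooc, lam_plus) /
                    binom_pmf(links, cooc, lam_minus))
```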
J00-2004
models of translational equivalence among wordsparallel texts have properties that distinguish them from other kinds of parallel datafirst most words translate to only one other wordsecond bitext correspondence is typically only partial many words in each text have no clear equivalent in the other textthis article presents methods for biasing statistical translation models to reflect these propertiesevaluation with respect to independent human judgments has confirmed that translation models biased in this fashion are significantly more accurate than a baseline knowledgefree modelthis article also shows how a statistical translation model can take advantage of preexisting knowledge that might be available about particular language pairseven the simplest kinds of languagespecific knowledge such as the distinction between content words and function words are shown to reliably boost translation model performance on some tasksstatistical models that reflect knowledge about the model domain combine the best of both the rationalist and empiricist paradigmswe measure the orthographic similarity using longest common subsequence ratio we define a direct association as an association between two words where the two words are indeed mutual translationswe propose competitive linking algorithm to align the words to construct confusion networkwe use competitive linking to greedily construct matchings where the pair score is a measure of wordtoword associationwe argue that there are ways to determine the boundaries of some multiwords phrases allowing to treat several words as a single token
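The summary above mentions measuring orthographic similarity with the longest common subsequence ratio (LCSR), the cue used for cognate identification in related work. A textbook formulation is short enough to show directly; the example word pair is illustrative only.

```python
def lcs_len(a: str, b: str) -> int:
    """Classic dynamic-programming longest-common-subsequence length."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def lcsr(a: str, b: str) -> float:
    """Longest common subsequence ratio: LCS length over the longer word's length."""
    return lcs_len(a, b) / max(len(a), len(b)) if a and b else 0.0

print(lcsr("government", "gouvernement"))  # ~0.83, suggesting likely cognates
```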
dialogue act modeling for automatic tagging and recognition of conversational speech so do you go to college right now are yo yeah it is my last year laughter you are a so you are a senior now yeah i am working on my projects trying to graduate laughter oh good for you yeah that is great um is is n c university is that uh state what did you say bbn technologies we describe a statistical approach for modeling dialogue acts in conversational speech ie speechactlike units such as statement question backchannel agreement disagreement and apologyour model detects and predicts dialogue acts based on lexical collocational and prosodic cues as well as on the discourse coherence of the dialogue act sequencethe dialogue model is based on treating the discourse structure of a conversation as a hidden markov model and the individual dialogue acts as observations emanating from the model statesconstraints on the likely sequence of dialogue acts are modeled via a dialogue act ngramthe statistical dialogue grammar is combined with word ngrams decision trees and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue actwe develop a probabilistic integration of speech recognition with dialogue modeling to improve both speech recognition and dialogue act classification accuracymodels are trained and evaluated using a large handlabeled database of 1155 conversations from the switchboard corpus of spontaneous humantohuman telephone speechwe achieved good dialogue act labeling accuracy and a small reduction in word recognition errorutterance so do you go to college right noware yo yeah it is my last year laughteryou are a so you are a senior nowyeah i am working on my projects trying to graduate laughteroh good for youyeahthat is great um is is n c university is that uh state nc statewhat did you saync statethe ability to model and automatically detect discourse structure is an important step toward understanding spontaneous dialoguewhile there is hardly consensus on exactly how discourse structure should be described some agreement exists that a useful first level of analysis involves the identification of dialogue acts a da represents the meaning of an utterance at the level of illocutionary force thus a da is approximately the equivalent of the speech act of searle the conversational game move of power or the adjacency pair part of schegloff and saks schegloff and jefferson table 1 shows a sample of the kind of discourse structure in which we are interestedeach utterance is assigned a unique da label drawn from a welldefined set thus das can be thought of as a tag set that classifies utterances according to a combination of pragmatic semantic and syntactic criteriathe computational community has usually defined these da categories so as to be relevant to a particular application although efforts are under way to develop da labeling systems that are domainindependent such as the discourse resource initiative damsl architecture while not constituting dialogue understanding in any deep sense da tagging seems clearly useful to a range of applicationsfor example a meeting summarizer needs to keep track of who said what to whom and a conversational agent needs to know whether it was asked a question or ordered to do somethingin related work das are used as a first processing step to infer dialogue games a slightly higher level unit that comprises a small number of dasinteractional dominance might be measured more accurately using da distributions than with simpler 
techniques and could serve as an indicator of the type or genre of discourse at handin all these cases da labels would enrich the available input for higherlevel processing of the spoken wordsanother important role of da information could be feedback to lowerlevel processingfor example a speech recognizer could be constrained by expectations of likely das in a given context constraining the potential recognition hypotheses so as to improve accuracythe 42 dialogue act labelsda frequencies are given as percentages of the total number of utterances in the overall corpusthe goal of this article is twofold on the one hand we aim to present a comprehensive framework for modeling and automatic classification of das founded on wellknown statistical methodsin doing so we will pull together previous approaches as well as new ideasfor example our model draws on the use of da ngrams and the hidden markov models of conversation present in earlier work such as nagata and morimoto and woszczyna and waibel however our framework generalizes earlier models giving us a clean probabilistic approach for performing da classification from unreliable words and nonlexical evidencefor the speech recognition task our framework provides a mathematically principled way to condition the speech recognizer on conversation context through dialogue structure as well as on nonlexical information correlated with da identitywe will present methods in a domainindependent framework that for the most part treats da labels as an arbitrary formal tag setthroughout the presentation we will highlight the simplifications and assumptions made to achieve tractable models and point out how they might fall short of realitysecond we present results obtained with this approach on a large widely available corpus of spontaneous conversational speechthese results besides validating the methods described are of interest for several reasonsfor example unlike in most previous work on da labeling the corpus is not taskoriented in nature and the amount of data used exceeds that in previous studies by at least an order of magnitude to keep the presentation interesting and concrete we will alternate between the description of general methods and empirical resultssection 2 describes the task and our data in detailsection 3 presents the probabilistic modeling framework a central component of this framework the discourse grammar is further discussed in section 4in section 5 we describe experiments for da classificationsection 6 shows how da models can be used to benefit speech recognitionprior and related work is summarized in section 7further issues and open problems are addressed in section 8 followed by concluding remarks in section 9the domain we chose to model is the switchboard corpus of humanhuman conversational telephone speech distributed by the linguistic data consortiumeach conversation involved two randomly selected strangers who had been charged with talking informally about one of several selfselected generalinterest topicsto train our statistical models on this corpus we combined an extensive effort in human handcoding of das for each utterance with a variety of automatic and semiautomatic toolsour data consisted of a substantial portion of the switchboard waveforms and corresponding transcripts totaling 1155 conversationsbefore handlabeling each utterance in the corpus with a da we needed to choose an utterance segmentation as the raw switchboard data is not segmented in a linguistically consistent wayto expedite the da labeling task 
and remain consistent with other switchboardbased research efforts we made use of a version of the corpus that had been handsegmented into sentencelevel units prior to our own work and independently of our da labeling system we refer to the units of this segmentation as utterancesthe relation between utterances and speaker turns is not onetoone a single turn can contain multiple utterances and utterances can span more than one turn each utterance unit was identified with one da and was annotated with a single da labelthe da labeling system had special provisions for rare cases where utterances seemed to combine aspects of several da typesautomatic segmentation of spontaneous speech is an open research problem in its own right a rough idea of the difficulty of the segmentation problem on this corpus and using the same definition of utterance units can be derived from a recent study in an automatic labeling of word boundaries as either utterance or nonboundaries using a combination of lexical and prosodic cues we obtained 96 accuracy based on correct word transcripts and 78 accuracy with automatically recognized wordsthe fact that the segmentation and labeling tasks are interdependent further complicates the problembased on these considerations we decided not to confound the da classification task with the additional problems introduced by automatic segmentation and assumed the utterancelevel segmentations as givenan important consequence of this decision is that we can expect utterance length and acoustic properties at utterance boundaries to be accurate both of which turn out to be important features of das we chose to follow a recent standard for shallow discourse structure annotation the dialog act markup in several layers tag set which was designed by the natural language processing community under the auspices of the discourse resource initiative we began with the damsl markup system but modified it in several ways to make it more relevant to our corpus and taskdamsl aims to provide a domainindependent framework for dialogue annotation as reflected by the fact that our tag set can be mapped back to damsl categories however our labeling effort also showed that content and taskrelated distinctions will always play an important role in effective da labelingthe switchboard domain itself is essentially quottaskfreequot thus giving few external constraints on the definition of da categoriesour primary purpose in adapting the tag set was to enable computational da modeling for conversational speech with possible improvements to conversational speech recognitionbecause of the lack of a specific task we decided to label categories that seemed inherently interesting linguistically and that could be identified reliablyalso the focus on conversational speech recognition led to a certain bias toward categories that were lexically or syntactically distinct while the modeling techniques described in this paper are formally independent of the corpus and the choice of tag set their success on any particular task will of course crucially depend on these factorsfor different tasks not all the techniques used in this study might prove useful and others could be of greater importancehowever we believe that this study represents a fairly comprehensive application of technology in this area and can serve as a point of departure and reference for other workthe resulting swbddamsl tag set was multidimensional approximately 50 basic tags could each be combined with diacritics indicating orthogonal information 
for example about whether or not the dialogue function of the utterance was related to taskmanagement and communicationmanagementapproximately 220 of the many possible unique combinations of these codes were used by the coders to obtain a system with somewhat higher interlabeler agreement as well as enough data per class for statistical modeling purposes a less finegrained tag set was devisedthis tag set distinguishes 42 mutually exclusive utterance types and was used for the experiments reported heretable 2 shows the 42 categories with examples and relative frequencieswhile some of the original infrequent classes were collapsed the resulting da type distribution is still highly skewedthis occurs largely because there was no basis for subdividing the dominant da categories according to taskindependent and reliable criteriathe tag set incorporates both traditional sociolinguistic and discoursetheoretic notions such as rhetorical relations and adjacency pairs as well as some more formbased labelsfurthermore the tag set is structured so as to allow labelers to annotate a switchboard conversation from transcripts alone in about 30 minuteswithout these constraints the da labels might have included some finer distinctions but we felt that this drawback was balanced by the ability to cover a large amount of datalabeling was carried out in a threemonth period in 1997 by eight linguistics graduate students at cu boulderinterlabeler agreement for the 42label tag set used here was 84 resulting in a kappa statistic of 080the kappa statistic measures agreement normalized for chance as argued in carletta kappa values of 08 or higher are desirable for detecting associations between several coded variables we were thus satisfied with the level of agreement achieveda total of 1155 switchboard conversations were labeled comprising 205000 utterances and 14 million wordsthe data was partitioned into a training set of 1115 conversations used for estimating the various components of our model and a test set of 19 conversations remaining conversations were set aside for future use the more frequent da types are briefly characterized belowas discussed above the focus of this paper is not on the nature of das but on the computational framework for their recognition full details of the da tag set and numerous motivating examples can be found in a separate report statements and opinionsthe most common types of utterances were statements and opinionsthis split distinguishes quotdescriptive narrative or personalquot statements from quototherdirected opinion statementsquot the distinction was designed to capture the different kinds of responses we saw to opinions and to statements well we have a cat um he is probably oh a good two years old big old fat and sassy tabbyhe is about five months old well rabbits are darlingi think it would be kind of stressful2 the effect of lacking acoustic information on labeling accuracy was assessed by relabeling a subset of the data with listening and was found to be fairly small a conservative estimate based on the relabeling study is that for most da types at most 2 of the labels might have changed based on listeningthe only da types with higher uncertainty were backchannels and agreements which are easily confused with each other without acoustic cues here the rate of change was no more than 10opinions often include such hedges as i think i believe it seems and i meanwe combined the statement and opinion classes for other studies on dimensions in which they did not differ 
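The labeling study above reports 84 percent interlabeler agreement, corresponding to a kappa of 0.80; kappa normalizes the observed agreement p_o by the agreement p_e expected by chance from the labelers' marginal label distributions, kappa = (p_o - p_e) / (1 - p_e). The sketch below computes a two-labeler (Cohen-style) version of the statistic; the study itself used eight labelers, and the toy label sequences here are invented purely for illustration.

from collections import Counter

def kappa(labels_a, labels_b):
    # cohen's kappa for two annotators labeling the same utterances:
    # kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement
    # and p_e is the agreement expected by chance given each annotator's
    # marginal distribution over the tag set
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    p_e = sum((ca[tag] / n) * (cb[tag] / n) for tag in set(ca) | set(cb))
    return (p_o - p_e) / (1 - p_e)

# invented toy labels for illustration only
print(kappa(["statement", "backchannel", "statement", "yes_answer"],
            ["statement", "agreement", "statement", "yes_answer"]))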
questionsquestions were of several typesthe yesnoquestion label includes only utterances having both the pragmatic force of a yesnoquestion and the syntactic markings of a yesnoquestion declarativequestions are utterances that function pragmatically as questions but do not have quotquestion formquot by this we mean that declarative questions normally have no whword as the argument of the verb and have quotdeclarativequot word order in which the subject precedes the verbsee weber for a survey of declarative questions and their various realizationsdo you have to have any special trainingbut that does not eliminate it does ituh i guess a year ago you are probably watching cnna lot rightso you are taking a government coursewell how old are youbackchannelsa backchannel is a short utterance that plays discoursestructuring roles eg indicating that the speaker should go on talkingthese are usually referred to in the conversation analysis literature as quotcontinuersquot and have been studied extensively we expect recognition of backchannels to be useful because of their discoursestructuring role and because they seem to occur at certain kinds of syntactic boundaries detecting a backchannel may thus help in predicting utterance boundaries and surrounding lexical materialfor an intuition about what backchannels look like table 3 shows the most common realizations of the approximately 300 types of backchannel in our switchboard subsetthe following table shows examples of backchannels in the context of a switchboard conversation in the fall and then the money that i will be making this summer we will be putting away for the college fundturn exits and abandoned utterancesabandoned utterances are those that the speaker breaks off without finishing and are followed by a restartturn exits resemble abandoned utterances in that they are often syntactically broken off but they are used mainly as a way of passing speakership to the other speakerturn exits tend to be single words often so or ora statement we are from uh i am from ohio a statement and my wife from florida a turnexit so a backchannel lthhuh a hedge so i do not know a abandoned it is statement i am glad it is not the kind of problem i have to come up with an answer to because it is not answers and agreementsyesanswers include yes yeah yep uhhuh and other variations on yes when they are acting as an answer to a yesnoquestion or declarativequestionsimilarly we also coded noanswersdetecting answers can help tell us that the previous utterance was a yesnoquestionanswers are also semantically significant since they are likely to contain new informationagreementaccept reject and maybeacceptpart all mark the degree to which a speaker accepts some previous proposal plan opinion or statementthe most common of these are the agreementacceptsthese are very often yes or yeah so they look a lot like answersbut where answers follow questions agreements often follow opinions or proposals so distinguishing these can be important for the discoursewe will now describe the mathematical and computational framework used in our studyour goal is to perform da classification and other tasks using a probabilistic formulation giving us a principled approach for combining multiple knowledge sources as well as the ability to derive model parameters automatically from a corpus using statistical inference techniquesgiven all available evidence e about a conversation the goal is to find the da sequence you that has the highest posterior probability p given that evidenceapplying 
bayes rule we get lihood of you given the evidencethe likelihood is usually much more straightforward to model than the posterior itselfthis has to do with the fact that our models are generative or causal in nature ie they describe how the evidence is produced by the underlying da sequence you estimating p requires building a probabilistic discourse grammar ie a statistical model of da sequencesthis can be done using familiar techniques from language modeling for speech recognition although the sequenced objects in this case are da labels rather than words discourse grammars will be discussed in detail in section 4the computation of likelihoods p depends on the types of evidence usedin our experiments we used the following sources of evidence either alone or in combination transcribed words the likelihoods used in equation 1 are p where w refers to the true words spoken in a conversationrecognized words the evidence consists of recognizer acoustics a and we seek to compute pas described later this involves considering multiple alternative recognized word sequencesprosodic features evidence is given by the acoustic features f capturing various aspects of pitch duration energy etc of the speech signal the associated likelihoods are pfor ease of reference all random variables used here are summarized in table 4the same variables are used with subscripts to refer to individual utterancesfor example wi is the word transcription of the ith utterance within a conversation to make both the modeling and the search for the best da sequence feasible we further require that our likelihood models are decomposable by utterancethis means that the likelihood given a complete conversation can be factored into likelihoods given the individual utteranceswe use li for the ith da label in the sequence you ie you and an answer might actually be relevant to the question before it violating the independence of the psimilarly speakers adjust their pitch or volume over time eg to the conversation partner or because of the structure of the discourse violating the independence of the pas in other areas of statistical modeling we count on the fact that these violations are small compared to the properties actually modeled namely the dependence of e1 on llireturning to the prior distribution of da sequences p it is convenient to make certain independence assumptions here tooin particular we assume that the prior distribution of l is markovian ie that each you depends only on a fixed number k of preceding da labels the ngrambased discourse grammars we used have this propertyas described later k 1 is a very good choice ie conditioning on the da types more than one removed from the current one does not improve the quality of the model by much at least with the amount of data available in our experimentsthe importance of the markov assumption for the discourse grammar is that we can now view the whole system of discourse grammar and local utterancebased likelihoods as a kthorder hidden markov model the hmm states correspond to das observations correspond to utterances transition probabilities are given by the discourse grammar and observation probabilities are given by the local likelihoods pwe can represent the dependency structure as a special case of bayesian belief network figure 1 shows the variables in the resulting hmm with directed edges representing conditional dependenceto keep things simple a firstorder hmm is assumedthe hmm representation allows us to use efficient dynamic programming algorithms to compute 
relevant aspects of the model such as the viterbi algorithm for hmms finds the globally most probable state sequencewhen applied to a discourse model with locally decomposable likelihoods and markovian discourse grammar it will therefore find precisely the da stolcke et al dialogue act modeling sequence with the highest posterior probability the combination of likelihood and prior modeling hmms and viterbi decoding is fundamentally the same as the standard probabilistic approaches to speech recognition and tagging it maximizes the probability of getting the entire da sequence correct but it does not necessarily find the da sequence that has the most da labels correct to minimize the total number of utterance labeling errors we need to maximize the probability of getting each da label correct individually ie we need to maximize p for each i 1 n we can compute the perutterance posterior da probabilities by summing where the summation is over all sequences you whose ith element matches the label in questionthe summation is efficiently carried out by the forwardbackward algorithm for hmms 3 for zerothorder discourse grammars viterbi decoding and forwardbackward decoding necessarily yield the same resultshowever for higherorder discourse grammars we found that forwardbackward decoding consistently gives slightly better accuracies as expectedtherefore we used this method throughoutthe formulation presented here as well as all our experiments uses the entire conversation as evidence for da classificationobviously this is possible only during offline processing when the full conversation is availableour paradigm thus follows historical practice in the switchboard domain where the goal is typically the offline processing of entire previously recorded conversationshowever the hmm formulation used here also supports computing posterior da probabilities based on partial evidence eg using only the utterances preceding the current one as would be required for online processingthe statistical discourse grammar models the prior probabilities p of da sequencesin the case of conversations for which the identities of the speakers are known the discourse grammar should also model turntaking behaviora straightforward approach is to model sequences of pairs our discourse grammars thus had a vocabulary of 42 x 2 84 labels plus tags for the beginning and end of conversationsfor example the second da tag in table 1 would be predicted by a trigram discourse grammar using the fact that the same speaker previously uttered a yesnoquestion which in turn was preceded by the startofconversationa computationally convenient type of discourse grammar is an ngram model based on da tags as it allows efficient decoding in the hmm frameworkwe trained standard backoff ngram models using the frequency smoothing approach of witten and bell models of various orders were compared by their perplexities ie the average number of choices the model predicts for each tag conditioned on the preceding tagstable 5 shows perplexities for three types of models p the das alone p the combined daspeaker id sequence and p the das conditioned on known speaker ids as expected we see an improvement for increasing ngram orderhowever the incremental gain of a trigram is small and higherorder models did not prove usefulcomparing p and p we see that speaker identity adds substantial information especially for higherorder modelsthe relatively small improvements from higherorder models could be a result of lack of training data or of an inherent 
independence of das from das further removedthe nearoptimality of the bigram discourse grammar is plausible given conversation analysis accounts of discourse structure in terms of adjacency pairs inspection of bigram probabilities estimated from our data revealed that conventional adjacency pairs receive high probabilities as expectedfor example 30 of yesnoquestions are followed by yesanswers 14 by noanswers commands are followed by agreements in 23 of the cases and statements elicit backchannels in 26 of all caseswe also investigated nonngram discourse models based on various language modeling techniques known from speech recognitionone motivation for alternative models is that ngrams enforce a onedimensional representation on da sequences whereas we saw above that the event space is really multidimensional another motivation is that ngrams fail to model longdistance dependencies such as the fact that speakers may tend to repeat certain das or patterns throughout the conversationthe first alternative approach was a standard cache model which boosts the probabilities of previously observed unigrams and bigrams on the theory that tokens tend to repeat themselves over longer distanceshowever this does not seem to be true for da sequences in our corpus as the cache model showed no improvement over the standard ngramthis result is somewhat surprising since unigram dialogue grammars are able to detect speaker gender with 63 accuracy on switchboard indicating that there are global variables in the da distribution that could potentially be exploited by a cache dialogue grammarclearly dialogue grammar adaptation needs further researchsecond we built a discourse grammar that incorporated constraints on da sequences in a nonhierarchical way using maximum entropy estimation the choice of features was informed by similar ones commonly used in statistical language models as well our general intuitions about potentially informationbearing elements in the discourse contextthus the model was designed so that the current da label was constrained by features such as unigram statistics the previous da and the da once removed das occurring within a window in the past and whether the previous utterance was by the same speakerwe found however that an me model using ngram constraints performed only slightly better than a corresponding backoff ngramadditional constraints such as da triggers distance1 bigrams separate encoding of speaker change and bigrams to the last da on the sameother channel did not improve relative to the trigram modelthe me model thus confirms the adequacy of the backoff ngram approach and leads us to conclude that da sequences at least in the switchboard domain are mostly characterized by local interactions and thus modeled well by loworder ngram statistics for this taskfor more structured tasks this situation might be differenthowever we have found no further exploitable structurewe now describe in more detail how the knowledge sources of words and prosody are modeled and what automatic da labeling results were obtained using each of the knowledge sources in turnfinally we present results for a combination of all knowledge sourcesda labeling accuracy results should be compared to a baseline accuracy of 35 the relative frequency of the most frequent da type in our test set4 51 dialogue act classification using words da classification using words is based on the observation that different das use distinctive word stringsit is known that certain cue words and phrases can serve as explicit 
indicators of discourse structuresimilarly we find distinctive correlations between certain phrases and da typesfor example 924 of the uhhuh occur in backchannels and 884 of the trigrams quot do youquot occur in yesnoquestionsto leverage this information source without handcoding knowledge about which words are indicative of which das we will use statistical language models that model the full word sequences associated with each da type511 classification from true wordsassuming that the true words of utterances are given as evidence we can compute wordbased likelihoods p in a straightforward way by building a statistical language model for each of the 42 dasall das of a particular type found in the training corpus were pooled and a daspecific trigram model was estimated using standard techniques modified bayes network including word hypotheses and recognizer acoustics the above approach is only a partial solution since we are not yet able to recognize words in spontaneous speech with perfect accuracya standard approach is to use the 1best hypothesis from the speech recognizer in place of the true word transcriptswhile conceptually simple and convenient this method will not make optimal use of all the information in the recognizer which in fact maintains multiple hypotheses as well as their relative plausibilitiesa more thorough use of recognized speech can be derived as followsthe classification framework is modified such that the recognizer acoustic information a appear as the evidencewe compute p by decomposing it into an acoustic likelihood p and a wordbased likelihood p and summing over all word sequences poi poi w youp e poi wp the second line is justified under the assumption that the recognizer acoustics are invariant to da type once the words are fixednote that this is another approximation in our modelingfor example different das with common words may be realized by different word pronunciationsfigure 2 shows the bayes network resulting from modeling recognizer acoustics through word hypotheses under this independence assumption note the added w variables in comparison to figure 1the acoustic likelihoods p correspond to the acoustic scores the recognizer outputs for every hypothesized word sequence w the summation over all w must be approximated in our experiments we summed over the 2500 best hypotheses generated by the recognizer for each utterancecare must be taken to scale the recognizer acoustic scores properly ie to exponentiate the recognizer acoustic scores by 1a where a is the language model weight of the recognizer5 5 in a standard recognizer the total log score of a hypothesis wi is computed as log p a log p wil where i wil is the number of words in the hypothesis and both a and a are parameters optimized to minimize the word error ratethe word insertion penalty p represents a correction to the language model that allows balancing insertion and deletion errorsthe language model weight a compensates for acoustic score variances that are effectively too large due to severe independence assumptions in the recognizer acoustic modelaccording to this rationale it is more appropriate to divide all score components by athus in all our experiments we computed a summand in equation 6 whose word and recognizerbased likelihoods with the ngram discourse grammars described earlierthe best accuracy obtained from transcribed words 71 is encouraging given a comparable human performance of 84 we observe about a 21 relative increase in classification error when using recognizer words this 
is remarkably small considering that the speech recognizer used had a word error rate of 41 on the test setwe also compared the nbest da classification approach to the more straightforward 1best approachin this experiment only the single best recognizer hypothesis is used effectively treating it as the true word stringthe 1best method increased classification error by about 7 relative to the nbest algorithm we also investigated prosodic information ie information independent of the words as well as the standard recognizer acousticsprosody is important for da recognition for two reasonsfirst as we saw earlier wordbased classification suffers from recognition errorssecond some utterances are inherently ambiguous based on words alonefor example some yesnoquestions have word sequences identical to those of statements but can often be distinguished by their final fo risea detailed study aimed at automatic prosodic classification of das in the switchboard domain is available in a companion paper here we investigate the interaction of prosodic models with the dialogue grammar and the wordbased da models discussed abovewe also touch briefly on alternative machine learning models for prosodic features tures computed automatically from the waveform without reference to word or phone informationthe features can be broadly grouped as referring to duration pauses pitch energy by anote that for selecting the best hypothesis in a recognizer only the relative magnitudes of the score weights matter however for the summation in equation 6 the absolute values become importantthe parameter values for a and p were those used by the standard recognizer they were not specifically optimized for the da classification taskdecision tree for the classification of backchannels and agreements each node is labeled with the majority class for that node as well as the posterior probabilities of the two classesthe following features are queried in the tree number of frames in continuous speech regions total utterance duration utterance duration excluding pauses 100 ms and mean signaltonoise ratio noise ratio snr speaking rate and gender in the case of utterance duration the measure correlates both with length in words and with overall speaking ratethe gender feature that classified speakers as either male or female was used to test for potential inadequacies in fo normalizationswhere appropriate we included both raw features and values normalized by utterance andor conversationwe also included features that are the output of the pitch accent and boundary tone event detector of taylor a complete description of prosodic features and an analysis of their usage in our models can be found in shriberg et al sion trees decision trees allow the combination of discrete and continuous features and can be inspected to help in understanding the role of different features and feature combinationsto illustrate one area in which prosody could aid our classification task we applied trees to da classifications known to be ambiguous from words aloneone frequent example in our corpus was the distinction between backchannels and agreements which share terms such as right and yeahas shown in figure 3 a prosodic tree trained on this task revealed that agreements have consistently longer durations and greater energy than do backchannelsthe hmm framework requires that we compute prosodic likelihoods of the form p for each utterance ut and associated prosodic feature values f we have the apparent difficulty that decision trees give estimates for 
the posterior probabilities pthe problem can be overcome by applying bayes rule locally note that p does not depend on you and can be treated as a constant for the purpose of da classificationa quantity proportional to the required likelihood can therefore be obtained either by dividing the posterior tree probability by the prior p6 or by training the tree on a uniform prior distribution of da typeswe chose the second approach downsampling our training data to equate da proportionsthis also counteracts a common problem with tree classifiers trained on very skewed distributions of target classes ie that lowfrequency classes are not modeled in sufficient detail because the majority class dominates the treegrowing objective function tion of prosody with other knowledge sources we trained a single tree to discriminate among the five most frequent da types and an other category comprising all remaining da typesthe decision tree was trained on a downsampled training subset containing equal proportions of these six da classesthe tree achieved a classification accuracy of 454 on an independent test set with the same uniform sixclass distributionthe chance accuracy on this set is 166 so the tree clearly extracts useful information from the prosodic featureswe then used the decision tree posteriors as scaled da likelihoods in the dialogue model hmm combining it with various ngram dialogue grammars for testing on our full standard test setfor the purpose of model integration the likelihoods of the other class were assigned to all da types comprised by that classas shown in table 7 the tree with dialogue grammar performs significantly better than chance on the raw da distribution although not as well as the wordbased methods networks compare to decision trees for the type of data studied hereneural networks are worth investigating since they offer potential advantages over decision treesthey can learn decision surfaces that lie at an angle to the axes of the input feature space unlike standard cart trees which always split continuous features on one dimension at a timethe response function of neural networks is continuous at the decision boundaries allowing them to avoid hard decisions and the complete fragmentation of data associated with decision tree questionsmost important however related work indicated that similarly structured networks are superior classifiers if the input features are words and are therefore a plugin replacement for the language model classifiers described in this paperneural networks are therefore a good candidate for a jointly optimized classifier of prosodic and wordlevel information since one can show that they are a generalization of the integration approach used herewe tested various neural network models on the same sixclass downsampled data used for decision tree training using a variety of network architectures and output layer functionsthe results are summarized in table 8 along with the baseline result obtained with the decision tree modelbased on these experiments a softmax network without hidden units resulted in only a slight improvement over the decision treea network with hidden units did not afford any additional advantage even after we optimized the number of hidden units indicating that complex combinations of features do not predict das better than linear combinations of input featureswhile we believe alternative classifier architectures should be investigated further as prosodic models the results so far seem to confirm our choice of decision trees as a 
model class that gives close to optimal performance for this task525 intonation event likelihoodsan alternative way to compute prosodically based da likelihoods uses pitch accents and boundary phrases the approach relies on the intuition that different utterance types are characterized by different intonational quottunesquot and has been successfully applied to the classification of move types in the dciem map task corpus the system detects sequences of distinctive pitch patterns by training one continuousdensity hmm for each da typeunfortunately the event classification accuracy on the switchboard corpus was considerably poorer than in the map task domain and da recognition results when coupled with a discourse grammar were substantially worse than with decision treesthe approach could prove valuable in the future however if the intonation event detector can be made more robust to corpora like outsbayes network for discourse hmm incorporating both word recognition and prosodic featuresas mentioned earlier we expect improved performance from combining word and prosodic informationcombining these knowledge sources requires estimating a combined likelihood p for each utterancethe simplest approach is to assume that the two types of acoustic observations are approximately conditionally independent once li is given since the recognizer acoustics are modeled by way of their dependence on words it is particularly important to avoid using prosodic features that are directly correlated with word identities or features that are also modeled by the discourse grammars such as utterance position relative to turn changesfigure 4 depicts the bayes network incorporating evidence from both word recognition and prosodic featuresone important respect in which the independence assumption is violated is in the modeling of utterance lengthwhile utterance length itself is not a prosodic feature it is an important feature to condition on when examining prosodic characteristics of utterances and is thus best included in the decision treeutterance length is captured directly by the tree using various duration measures while the daspecific lms encode the average number of words per utterance indirectly through ngram parameters but still accurately enough to violate independence in a significant way as discussed in section 8 this problem is best addressed by joint lexicalprosodic modelswe need to allow for the fact that the models combined in equation 8 give estimates of differing qualitiestherefore we introduce an exponential weight a on p each half was used to separately optimize the parameters and the best values were then tested on the respective other halfthe reported results are from the aggregate outcome on the two test set halves on recognized words with the top5 tree classifier mentioned in section 523results are summarized in table 9as shown the combined classifier presents a slight improvement over the recognizerbased classifierthe experiment without discourse grammar indicates that the combined evidence is considerably stronger than either knowledge source alone yet this improvement seems to be made largely redundant by the use of priors and the discourse grammarfor example by definition declarativequestions are not marked by syntax and are thus confusable with statements and opinionswhile prosody is expected to help disambiguate these cases the ambiguity can also be removed by examining the context of the utterance eg by noticing that the following utterance is a yesanswer or noansweras shown the 
combined classifier was consistently more accurate than the classifier using words alonealthough the gain in accuracy was not statistically significant for the small recognizer test set because of a lack of power replication for a larger handtranscribed test set showed the gain to be highly significant for both subtasks by a sign test p p 020the 1best da and the two mixture models also did not differ significantly on this test setin interpreting these results one must realize however that wer results depend on a complex combination of factors most notably interaction between language models and the acoustic modelssince the experiments only varied the language models used in rescoring it is also informative to compare the quality of these models as reflected by perplexityon this measure we see a substantial 13 reduction which is achieved by both the oracle and the mixtureoflmsthe perplexity reduction for the 1best lm is only 98 showing the advantage of the mixture approachto better understand the lack of a more substantial reduction in word error we analyzed the effect of the daconditioned rescoring on the individual das ie grouping the test utterances by their true da typestable 12 shows the wer improvements for a few da types ordered by the magnitude of improvement achievedas shown all frequent da types saw improvement but the highest wins were observed for typically short das such as answers and backchannelsthis is to be expected as such das tend to be syntactically and lexically highly constrainedfurthermore the distribution of number of words across da types is very uneven statements and opinions the da types dominating in both frequency and number of words see no more than 05 absolute improvement thus explaining the small overall improvementin hindsight this is also not surprising since the bulk of the training data for the baseline lm consists of these das allowing only little improvement in the daspecific lmsa more detailed analysis of the effect of da modeling on speech recognition errors can be found elsewhere in summary our experiments confirmed that da modeling can improve word recognition accuracy quite substantially in principle at least for certain da types but that the skewed distribution of das limits the usefulness of the approach on the switchboard corpusthe benefits of da modeling might therefore be more pronounced on corpora with more even da distribution as is typically the case for taskoriented dialoguestaskoriented dialogues might also feature specific subtypes of general da categories that might be constrained by discourseprior research on taskoriented dialogues summarized in the next section however has also found only small reductions in wer this suggests that even in taskoriented domains more research is needed to realize the potential of da modeling for asras indicated in the introduction our work builds on a number of previous efforts in computational discourse modeling and automatic discourse processing most of which occurred over the last halfdecadeit is generally not possible to directly compare quantitative results because of vast differences in methodology tag set type and amount of training data and principally assumptions made about what information is available for quotfreequot thus we will focus on the conceptual aspects of previous research efforts and while we do offer a summary of previous quantitative results these should be interpreted as informative datapoints only and not as fair comparisons between algorithmsprevious research on da modeling has 
generally focused on taskoriented dialogue with three tasks in particular garnering much of the research effortthe map task corpus consists of conversations between two speakers with slightly different maps of an imaginary territorytheir task is to help one speaker reproduce a route drawn only on the other speaker map all without being able to see each other mapsof the da modeling algorithms described below taylor et al and wright were based on map taskthe verbmobil corpus consists of twoparty scheduling dialoguesa number of the da modeling algorithms described below were developed for verbmobil including those of mast et al warnke et al reithinger et al reithinger and klesen and samuel carberry and vijayshanker the atr conference corpus is a subset of a larger atr dialogue database consisting of simulated dialogues between a secretary and a questioner at international conferencesresearchers using this corpus include nagata nagata and morimoto and kita et al table 13 shows the most commonly used versions of the tag sets from those three tasksas discussed earlier these domains differ from the switchboard corpus in being taskorientedtheir tag sets are also generally smaller but some of the same problems of balance occurfor example in the map task domain 33 of the words occur in 1 of the 12 das table 14 shows the approximate size of the corpora the tag set and tag estimation accuracy rates for various recent models of da predictionthe results summarized in the table also illustrate the differences in inherent difficulty of the tasksfor example the task of warnke et al was to simultaneously segment and tag das whereas the other results rely on a prior manual segmentationsimilarly the task in wright and in our study was to determine da types from speech input whereas work by others is based on handtranscribed textual inputdialogue act tag sets used in three other extensively studied corporaverbmobilthese 18 highlevel das used in verbmobil1 are abstracted over a total of 43 more specific das most experiments on verbmobil das use the set of 18 rather than 43examples are from jekat et al atrthe 9 das used in the atr dialogue database task some later models used an extended set of 15 dasexamples are from the english translations given by nagata the use of ngrams to model the probabilities of da sequences or to predict upcoming das online has been proposed by many authorsit seems to have been first employed by nagata and in followup papers by nagata and morimoto on the atr dialogue databasethe model predicted upcoming das by using bigrams and trigrams conditioned on preceding das trained on a corpus of 2722 dasmany others subsequently relied on and enhanced this ngramsofdas approach often by applying standard techniques from statistical language modelingreithinger et al for example used deleted interpolation to smooth the dialogue ngramschucarroll uses knowledge of subdialogue structure to selectively skip previous das in choosing conditioning for da predictionnagata and morimoto may also have been the first to use word ngrams as a miniature grammar for das to be used in improving speech recognitionthe idea caught on very quickly suhm and waibel mast et al warnke et al reithinger and klesen and taylor et al all use variants of backoff interpolated or class ngram language models to estimate da likelihoodsany kind of sufficiently powerful trainable language model could perform this function of course and indeed alexandersson and reithinger propose using automatically learned stochastic contextfree 
grammarsjurafsky shriberg fox and curl show that the grammar of some das such as appreciations can be captured by finitestate automata over partofspeech tagsngram models are likelihood models for das ie they compute the conditional probabilities of the word sequence given the da typewordbased posterior probability estimators are also possible although less commonmast et al propose the use of semantic classification trees a kind of decision tree conditioned on word patterns as featuresfinally ries shows that neural networks using only unigram features can be superior to higherorder ngram da modelswarnke et al and ohler harbeck and niemann use related discriminative training algorithms for language modelswoszczyna and waibel and suhm and waibel followed by chucarroll seem to have been the first to note that such a combination of word and dialogue ngrams could be viewed as a dialogue hmm with word strings as the observations all models listed in table 14 rely on some version of this hmm metaphorsome researchers explicitly used hmm induction techniques to infer dialogue grammarswoszczyna and waibel for example trained an ergodic hmm using expectationmaximization to model speech act sequencingkita et al made one of the few attempts at unsupervised discovery of dialogue structure where a finitestate grammar induction algorithm is used to find the topology of the dialogue grammarcomputational approaches to prosodic modeling of das have aimed to automatically extract various prosodic parameterssuch as duration pitch and energy patternsfrom the speech signal some approaches model fo patterns with techniques such as vector quantization and gaussian classifiers to help disambiguate utterance typesan extensive comparison of the prosodic da modeling literature with our work can be found in shriberg et al da modeling has mostly been geared toward automatic da classification and much less work has been done on applying da models to automatic speech recognitionnagata and morimoto suggest conditioning word language models on das to lower perplexitysuhm and waibel and eckert gallwitz and niemann each condition a recognizer lm on lefttoright da predictions and are able to show reductions in word error rate of 1 on taskoriented corporamost similar to our own work but still in a taskoriented domain the work by taylor et al combines da likelihoods from prosodic models with those from 1best recognition output to condition the recognizer lm again achieving an absolute reduction in word error rate of 1 as disappointing as the 03 improvement in our experimentsrelated computational tasks beyond da classification and speech recognition have received even less attention to datewe already mentioned warnke et al and finke et al who both showed that utterance segmentation and classification can be integrated into a single search processfukada et al investigate augmenting da tagging with more detailed semantic quotconceptquot tags as a preliminary step toward an interlinguabased dialogue translation systemlevin et al couple da classification with dialogue game classification dialogue games are units above the da level ie short da sequences such as questionanswer pairsall the work mentioned so far uses statistical models of various kindsas we have shown here such models offer some fundamental advantages such as modularity and composability and the ability to deal with noisy input in a principled wayhowever many other classifier architectures are applicable to the tasks discussed in particular to da classificationa 
nonprobabilistic approach for da labeling proposed by samuel carberry and vijayshanker is transformationbased learning finally it should be noted that there are other tasks with a mathematical structure similar to that of da tagging such as shallow parsing for natural language processing and dna classification tasks from which further techniques could be borrowedhow does the approach presented here differ from these various earlier models particularly those based on hmmsapart from corpus and tag set differences our approach differs primarily in that it generalizes the simple hmm approach to cope with new kinds of problems based on the bayes network representations depicted in figures 2 and 4for the da classification task our framework allows us to do classification given unreliable words and given nonlexical evidencefor the speech recognition task the generalized model gives a clean probabilistic framework for conditioning word probabilities on the conversation context via the underlying da structureunlike previous models that did not address speech recognition or relied only on an intuitive 1best approximation our model allows computation of the optimum word sequence by effectively summing over all possible da sequences as well as all recognition hypotheses throughout the conversation using evidence from both past and future8discussion and issues for future research our approach to dialogue modeling has two major components statistical dialogue grammars modeling the sequencing of das and da likelihood models expressing the local cues for daswe made a number of significant simplifications to arrive at a computationally and statistically tractable formulationin this formulation das serve as the hinges that join the various model components but also decouple these components through statistical independence assumptionsconditional on the das the observations across utterances are assumed to be independent and evidence of different kinds from the same utterance is assumed to be independentfinally da types themselves are assumed to be independent beyond a short span further research within this framework can be characterized by which of these simplifications are addresseddialogue grammars for conversational speech need to be made more aware of the temporal properties of utterancesfor example we are currently not modeling the fact that utterances by the conversants may actually overlap in addition we should model more of the nonlocal aspects of discourse structure despite our negative results so farfor example a contextfree discourse grammar could potentially account for the nested structures proposed in grosz and sidner the standard ngram models for da discrimination with lexical cues are probably suboptimal for this task simply because they are trained in the maximum likelihood framework without explicitly optimizing discrimination between da typesthis may be overcome by using discriminative training procedures training neural networks directly with posterior probability seems to be a more principled approach and it also offers much easier integration with other knowledge sourcesprosodic features for example can simply be added to the lexical features allowing the model to capture dependencies and redundancies across knowledge sourceskeywordbased techniques from the field of message classification should also be applicable here eventually it is desirable to integrate dialogue grammar lexical and prosodic cues into a single model eg one that predicts the next da based on da history and all the 
local evidencethe study of automatically extracted prosodic features for da modeling is likewise only in its infancyour preliminary experiments with neural networks have shown that small gains are obtainable with improved statistical modeling techniqueshowever we believe that more progress can be made by improving the underlying features themselves in terms of both better understanding of how speakers use them and ways to reliably extract them from dataregarding the data itself we saw that the distribution of das in our corpus limits the benefit of da modeling for lowerlevel processing in particular speech recognitionthe reason for the skewed distribution was in the nature of the task in switchboardit remains to be seen if more finegrained da distinctions can be made reliably in this corpushowever it should be noted that the da definitions are really arbitrary as far as tasks other than da labeling are concernedthis suggests using unsupervised selforganizing learning schemes that choose their own da definitions in the process of optimizing the primary task whatever it may behandlabeled da categories may still serve an important role in initializing such an algorithmwe believe that dialoguerelated tasks have much to benefit from corpusdriven automatic learning techniquesto enable such research we need fairly large standardized corpora that allow comparisons over time and across approachesdespite its shortcomings the switchboard domain could serve this purposewe have developed an integrated probabilistic approach to dialogue act modeling for conversational speech and tested it on a large speech corpusthe approach combines models for lexical and prosodic realizations of das as well as a statistical discourse grammarall components of the model are automatically trained and are thus applicable to other domains for which labeled data is availableclassification accuracies achieved so far are highly encouraging relative to the inherent difficulty of the task as measured by human labeler performancewe investigated several modeling alternatives for the components of the model and found performance largely independent of these choicesfinally we developed a principled way of incorporating da modeling into the probability model of a continuous speech recognizer by constraining word hypotheses using the discourse contexthowever the approach gives only a small reduction in word error on our corpus which can be attributed to a preponderance of a single dialogue act type the research described here is based on a project at the 1997 workshop on innovative techniques in lvcsr at the center for speech and language processing at johns hopkins university the dalabeled switchboard transcripts as well as other projectrelated publications are available at httpwwwcolorado edulingjurafskyws97we thank the funders researchers and support staff of the 1997 johns hopkins summer workshop especially bill byrne fred jelinek harriet nock joe picone kimberly shirirtg and chuck wootersadditional support came from the nsf via grants iri9619921 and iri9314967 and from the uk engineering and physical science research council thanks to mitch weintraub to susann luperfoy nigel ward james allen julia hirschberg and marilyn walker for advice on the design of the swbddamsl tag set to the discourse labelers at cu boulder and the intonation labelers at the university of edinburgh we also thank andy kehler and the anonymous reviewers for valuable comments on a draft of this paper
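The framework described above treats the DA labels as hidden states of an HMM: an ngram discourse grammar supplies the transition probabilities, per-utterance likelihoods (for example DA-specific word trigram models or suitably scaled prosodic decision-tree scores) supply the emission terms, and Viterbi decoding recovers the most probable DA sequence, with forward-backward used when per-utterance posteriors are wanted. The sketch below is a minimal log-space Viterbi over precomputed per-DA log-likelihoods and a bigram discourse grammar; all numbers in the toy example are invented, and it omits the speaker-id pairing and the summation over recognizer hypotheses used in the actual system.

import math

def viterbi_da(utt_loglik, log_trans, log_init):
    # utt_loglik: one dict per utterance mapping DA label -> log P(evidence_i | DA)
    #   (eg from a DA-specific word trigram model and/or a scaled prosodic tree score)
    # log_trans: dict (prev_da, da) -> log P(da | prev_da), the bigram discourse grammar
    # log_init:  dict da -> log P(da | conversation start)
    # returns the single most probable DA sequence under the HMM
    labels = list(log_init)
    best = [{d: log_init[d] + utt_loglik[0].get(d, -math.inf) for d in labels}]
    back = []
    for obs in utt_loglik[1:]:
        scores, ptr = {}, {}
        for d in labels:
            prev, s = max(((p, best[-1][p] + log_trans.get((p, d), -math.inf))
                           for p in labels), key=lambda x: x[1])
            scores[d] = s + obs.get(d, -math.inf)
            ptr[d] = prev
        best.append(scores)
        back.append(ptr)
    # trace back the best path from the highest-scoring final state
    last = max(best[-1], key=best[-1].get)
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

# toy usage with two DA types and three utterances (numbers are made up)
loglik = [{"statement": -2.0, "backchannel": -9.0},
          {"statement": -6.0, "backchannel": -1.5},
          {"statement": -3.0, "backchannel": -4.0}]
trans = {(a, b): math.log(0.5)
         for a in ("statement", "backchannel") for b in ("statement", "backchannel")}
init = {"statement": math.log(0.5), "backchannel": math.log(0.5)}
print(viterbi_da(loglik, trans, init))  # -> ['statement', 'backchannel', 'statement']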
J00-3003
Dialogue act modeling for automatic tagging and recognition of conversational speech. We describe a statistical approach for modeling dialogue acts in conversational speech, i.e., speech-act-like units such as Statement, Question, Backchannel, Agreement, Disagreement, and Apology. Our model detects and predicts dialogue acts based on lexical, collocational, and prosodic cues, as well as on the discourse coherence of the dialogue act sequence. The dialogue model is based on treating the discourse structure of a conversation as a hidden Markov model and the individual dialogue acts as observations emanating from the model states. Constraints on the likely sequence of dialogue acts are modeled via a dialogue act n-gram. The statistical dialogue grammar is combined with word n-grams, decision trees, and neural networks modeling the idiosyncratic lexical and prosodic manifestations of each dialogue act. We develop a probabilistic integration of speech recognition with dialogue modeling to improve both speech recognition and dialogue act classification accuracy. Models are trained and evaluated using a large hand-labeled database of 1,155 conversations from the Switchboard corpus of spontaneous human-to-human telephone speech. We achieved good dialogue act labeling accuracy and a small reduction in word recognition error. We use HMMs as a general model of discourse, with an application to speech acts in conversations.
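The probabilistic integration of speech recognition and dialogue modeling mentioned in this summary amounts, for each utterance, to mixing DA-specific word language models weighted by the posterior probability of each dialogue act given the conversation context. The sketch below shows just that rescoring step for one utterance's n-best list; it assumes the DA posteriors and DA-conditioned word scores are supplied by other components (for example, by forward-backward over the dialogue grammar), and all names are illustrative, not the authors' own.

```python
import math

def logsumexp(xs):
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def rescore_nbest(nbest, da_posterior, da_word_score, lm_weight=1.0):
    """Pick the word hypothesis W maximising
        acoustic score + lm_weight * log( sum_u P(u | context) * P(W | u) ).

    nbest:         list of (words, acoustic_log_score) pairs.
    da_posterior:  dict u -> P(u | conversation context), summing to 1.
    da_word_score: function (words, u) -> log P(words | u) under the
                   DA-specific word n-gram for dialogue act u.
    """
    best_words, best_score = None, float("-inf")
    for words, acoustic in nbest:
        mixture = logsumexp([math.log(p) + da_word_score(words, u)
                             for u, p in da_posterior.items() if p > 0.0])
        score = acoustic + lm_weight * mixture
        if score > best_score:
            best_words, best_score = words, score
    return best_words
```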
a compressionbased algorithm for chinese word segmentation chinese is written without using spaces or other word delimiters although a text may be thought of as a corresponding sequence of words there is considerable ambiguity in the placement of boundaries interpreting a text as a sequence of words is beneficial for some information retrieval and storage tasks for example fulltext search wordbased compression and key phrase extraction we describe a scheme that infers appropriate positions for word boundaries using an adaptive language model that is standard in text compression it is trained on a corpus of presegmented text and when applied to new text interpolates word boundaries so as to maximize the compression obtained this simple and general method performs well with respect to specialized schemes for chinese language segmentation chinese is written without using spaces or other word delimitersalthough a text may be thought of as a corresponding sequence of words there is considerable ambiguity in the placement of boundariesinterpreting a text as a sequence of words is beneficial for some information retrieval and storage tasks for example fulltext search wordbased compression and key phrase extractionwe describe a scheme that infers appropriate positions for word boundaries using an adaptive language model that is standard in text compressionit is trained on a corpus of presegmented text and when applied to new text interpolates word boundaries so as to maximize the compression obtainedthis simple and general method performs well with respect to specialized schemes for chinese language segmentationlanguages such as chinese and japanese are written without using any spaces or other word delimiters indeed the western notion of a word boundary is literally alien nevertheless words are present in these languages and chinese words often comprise several characters typically two three or fourfivecharacter words also exist but they are raremany characters can stand alone as words in themselves while on other occasions the same character is the first or second character of a twocharacter word and on still others it participates as a component of a three or fourcharacter wordthis phenomenon causes obvious ambiguities in word segmentationreaders unfamiliar with chinese can gain an appreciation of the problem of multiple interpretations from figure 1 which shows two alternative interpretations of the same chinese character sequencethe text is a joke that relies on the ambiguity of phrasingonce upon a time the story goes a man set out on a long journeybefore he could return home the rainy season began and he had to take shelter at a friend housebut he overstayed his welcome and one day his friend wrote him a note the first line in figure 1the intended interpretation is shown in the second line which means quotit is raining the god would like the guest to stayalthough the god wants you to stay i do notquot on seeing the note the visitor took the hint and prepared to leaveas a joke he amended the note with the punctuation shown in the third line which leaves three sentences whose meaning is totally differentquotthe rainy day the staying daywould you like me to staysurequot example of treating each character in a query as a wordthis example relies on ambiguity of phrasing but the same kind of problem can arise with word segmentationfigure 2 shows a more prosaic examplefor the ordinary sentence of the first line there are two different interpretations depending on the context of the sentence quoti 
like new zealand flowersquot and quoti like fresh broccoliquot respectivelythe fact that machinereadable chinese text is invariably stored in unsegmented form causes difficulty in applications that use the word as the basic unitfor example search engines index documents by storing a list of the words they contain and allow the user to retrieve all documents that contain a specified combination of query termsthis presupposes that the documents are segmented into wordsfailure to do so and treating every character as a word in itself greatly decreases the precision of retrieval since large numbers of extraneous documents are returned that contain characters but not words from the queryfigure 3 illustrates what happens when each character in a query is treated as a singlecharacter wordthe intended query is quotphysicsquot or quotphysicistquot the first character returns documents about such things as quotevidencequot quotproductsquot quotbodyquot quotimagequot quotpricesquot while the second returns documents about quottheoryquot quotbarberquot and so onthus many documents that are completely irrelevant to the query will be returned causing the precision of information retrieval to decrease greatlysimilar problems occur in wordbased compression speech recognition and so onit is true that most search engines allow the user to search for multiword phrases by enclosing them in quotation marks and this facility could be used to search for multicharacter words in chinesethis however runs the risk of retrieving irrelevant documents in which the same characters occur in sequence but with a different intended segmentationmore importantly it imposes on the user an artificial requirement to perform manual segmentation on each fulltext queryword segmentation is an important prerequisite for such applicationshowever it is a difficult and illdefined taskaccording to sproat et al and wu and fung experiments show that only about 75 agreement between native speakers is to be expected on the quotcorrectquot segmentation and the figure reduces as more people become involvedthis paper describes a general scheme for segmenting text by inferring the position of word boundaries thus supplying a necessary preprocessing step for applications like those mentioned aboveunlike other approaches which involve a dictionary of legal words and are therefore languagespecific it works by using a corpus of alreadysegmented text for training and thus can easily be retargeted for any language for which a suitable corpus of segmented material is availableto infer word boundaries a general adaptive text compression technique is used that predicts upcoming characters on the basis of their preceding contextspaces are inserted into positions where their presence enables the text to be compressed more effectivelythis approach means that we can capitalize on existing research in text compression to create good models for word segmentationto build a segmenter for a new language the only resource required is a corpus of segmented text to train the compression modelthe structure of this paper is as follows the next section reviews previous work on the chinese segmentation problemthen we explain the operation of the adaptive text compression technique that will be used to predict word boundariesnext we show how space insertion can be viewed as a problem of hidden markov modeling and how higherorder models such as the ones used in text compression can be employed in this waythe following section describes several experiments designed to 
evaluate the success of the new word segmenterfinally we discuss the application of language segmentation in digital librariesour system for segmenting chinese text is available on the world wide web at httpwwwnzdlorgcgibincongbit takes gbencoded input text which can be cut from a chinese document and pasted into the input windowonce the segmenter has been invoked the result is rewritten into the same windowthe problem of segmenting chinese text has been studied by researchers for many years see wu and tseng for a detailed surveyseveral different algorithms have been proposed which generally speaking can be classified into dictionarybased and statisticalbased methods although other techniques that involve more linguistic information such as syntactic and semantic knowledge have been reported in the natural language processing literaturecheng young and wong describe a dictionarybased methodgiven a dictionary of frequently used chinese words an input string is compared with words in the dictionary to find the one that matches the greatest number of characters of the inputthis is called the maximum forward match heuristican alternative is to work backwards through the text resulting in the maximum backward match heuristicit is easy to find situations where these failto use an english example forward matching fails on the input quotthe red quot while backward matching fails on text ending quot his carquot analogous failures occur with chinese textdai khoo and loh use statistical methods to perform text segmentationthey concentrate on twocharacter words because two characters is the most common word length in chineseseveral different notions of frequency of characters and bigrams are explored relative frequency document frequency weighted document frequency and local frequencythey also look at both contextual and positional informationcontextual information is found to be the single most important factor that governs the probability that a bigram forms a word incorporating the weighted document frequency can improve the model significantlyin contrast the positional frequency is not found to be helpful in determining wordsponte and croft introduce two models for word segmentation wordbased and bigram modelsboth utilize probabilistic automatain the wordbased method a suffix tree of words in the lexicon is used to initialize the modeleach node is associated with a probability which is estimated by segmenting training text using the longest match strategythis makes the segmenter easy to transplant to new languagesthe bigram model uses the lexicon to initialize probability estimates for each bigram and the probability with which each bigram occurs and uses the baumwelch algorithm to update the probabilities as the training text is processedhockenmaier and brew present an algorithm based on palmer experiments that applies a symbolic machine learning techniquetransformationbased errordriven learning to the problem of chinese word segmentationusing a set of rule templates and four distinct initialstate annotators palmer concludes that the learning technique works wellhockenmaier and brew investigate how performance is influenced by different rule templates and corpus sizethey use three rule templates simple bigram rules trigram rules and more elaborate rulestheir experiments indicate that training data size has the most significant influence on performancegood performance can be acquired using simple rules only if the training corpus is large enoughlee ng and lu have recently introduced a new 
segmentation method for a chinese spellchecking applicationusing a dictionary with singlecharacter word occurrence frequencies this scheme first divides text into sentences then into phrases and finally into words using a small number of word combinations that are conditioned on a heuristic to avoid delay during spellcheckingwhen compared with forward maximum matching the new method resolves more than 10 more ambiguities but enjoys no obvious speed advantagethe way in which chinese characters are used in names differs greatly from the way they are used in ordinary text and some researchers notably sproat et al have established specialpurpose recognizers for chinese names designed to improve the accuracy of automatic segmenters by treating names speciallychinese names always take the form family name followed by given namewhereas family names are limited to a small group of characters given names can consist of any charactersthey normally comprise one or two characters but threecharacter names have arisen in recent years to ensure uniqueness when the family name is popularsuch as smith or jones in englishsproat et al implement special recognizers not only for chinese names and transliterated foreign names but for components of morphologically obtained words as wellthe approach we present is not specially tailored for name recognition but because it is fully adaptive it is likely that it would yield good performance on names if lists of names were provided as supplementary training textthis has not yet been testedstatistical language models are well developed in the field of text compressioncompression methods are usually divided into symbolwise and dictionary schemes symbolwise methods which generally make use of adaptively generated statistics give excellent compressionin fact they include the best known methodsalthough dictionary methods such as the zivlempel schemes perform less well they are used in practical compression utilities like unix compress and gzip because they are fastin our work we use the prediction by partial matching symbolwise compression scheme which has become a benchmark in the compression communityit generates quotpredictionsquot for each input symbol in turneach prediction takes the form of a probability distribution that is provided to an encoderthe encoder is usually an arithmetic coder the details of coding are of no relevance to this paperppm is an ngram approach that uses finitecontext models of characters where the previous few characters predict the upcoming onethe conditional probability distribution of characters conditioned on the preceding few characters is maintained and updated as each character of input is processedthis distribution along with the actual value of the preceding few characters is used to predict each upcoming symbolexactly the same distributions are maintained by the decoder which updates the appropriate distribution as each character is receivedthis is what we call adaptive modeling both encoder and decoder maintain the same modelsnot by communicating the models directly but by updating them in precisely the same wayrather than using a fixed context length the ppm method chooses a maximum context length and maintains statistics for this and all shorter contextsthe maximum is five in most of the experiments below and statistics are maintained for models of order 5 4 3 2 1 and 0these are not stored separately they are all kept in a single trie structureppm incorporates a simple and highly effective method to combine the predictions of the 
models of different orderoften called the problem of quotbackoffquot to encode the next symbol it starts with the maximumorder model if that model contains a prediction for the upcoming character the character is transmitted according to the order 5 distributionotherwise both encoder and decoder escape down to order 4there are two possible situationsif the order 5 contextthat is the preceding fivecharacter sequencehas not been encountered before then escape to order 4 is inevitable and both encoder and decoder can deduce that fact without requiring any communicationif not that is if the preceding five characters have been encountered in sequence before but not followed by the upcoming character then only the encoder knows that an escape is necessaryin this case therefore it must signal this fact to the decoder by transmitting an escape eventand space must be reserved for this event in every probability distribution that the encoder and decoder maintainppm model after processing the string tobeornottobe c count p prediction probabilityorder 2 c p order 1 c p order 0 c p prediction prediction prediction be o 1 12 b e 2 34 4 b 2 326 esc 1 12 4 esc 1 14 4 e 2 326 eo r 1 12 e o 1 12 n 1 126 esc 1 12 esc 1 12 4 o 4 726 no t 1 12 n 4 o 1 12 r 1 126 4 esc 1 12 esc 1 12 4 t 3 526 ob e 2 34 o b 2 38 4 esc 6 313 esc 1 14 r 1 18 order 1 or n 1 12 4 t 1 18 4 esc 1 12 esc 3 38 prediction c p ot t 1 12 are n 1 12 4 a 1 1ia1 4 esc 1 12 4 esc 1 12 right now p o 1 12 t 4 o 2 12 esc 1 12 t 1 16 to 4 b 2 34 esc 2 13 4 esc 1 14 tt 4 o 1 12 esc 1 12 once any necessary escape event has been transmitted and received both encoder and decoder agree that the upcoming character will be coded by the order 4 modelof course this may not be possible either and further escapes may take placeultimately the order 0 model may be reached in this case the character can be transmitted if it is one that has occurred beforeotherwise there is one further escape and the standard ascii representation of the character is sentthe only remaining question is how to calculate the probabilities from the counts a simple matter once we have resolved how much space to allocate for the escape probabilitythere has been much discussion of this question and several different methods have been proposedour experiments calculate the escape probability in a particular context as where n is the number of times that context has appeared and d is the number of different symbols that have directly followed it the probability of a character that has occurred c times in that context is since there are d such characters and their counts sum to n it is easy to confirm that the probabilities in the distribution sum to 1to illustrate the ppm modeling technique table 1 shows the model after the string tobeornottobe has been processedin this illustration the maximum model order is 2 and each prediction has a count c and a prediction probability p the probability is determined from the counts associated with the prediction using the formula that we discuss aboveia l is the size of the alphabet and it is this that determines the probability for each unseen characterthe model in table 1 is used as follows suppose the character following tobeornottobe is osince the order 2 context is be and the upcoming symbol has already been seen once in this context the order 2 model is used for encoding in this case and the encoding probability is 12thus the symbol o would be encoded in 1 bitif the next character instead of o were t this has not been seen in the current order 2 
context consequently an order 2 escape event is coded and the context is truncated to e checking the order 1 model the upcoming character t has not been seen in this context so an order 1 escape event is coded and the context is truncated to the null context corresponding to the order 0 modelthe character t is finally encoded in this model with probability 526thus three encodings occur for this one character with probabilities 12 12 and 526 respectively which together amount to just over 5 bits of informationif the upcoming character had been x instead of t a final level of escape this time to order 0 would have occurred and the x would be encoded with a probability of 1256 for a total of just over 10 bitsit is clear from table 1 that in the context tobeornottobe if the next character is o it will be encoded by the order 2 modelhence if an escape occurs down to order 1 the next character cannot be othis makes it unnecessary to reserve probability space for the occurrence of o in the order 1 modelsthis idea which is called exclusion can be exploited to improve compressiona character that occurs at one level is excluded from all lowerorder predictions allowing a greater share of the probability space to be allocated to the other characters in these lowerorder models for example if the character b were to follow tobeornottobe it would be encoded with probabilities without exclusion leading to a coding requirement of 51 bitshowever if exclusion was exploited both encoder and decoder will recognize that escape from order 1 to order 0 is inevitable because the order 1 model adds no characters that were not already predicted by the order 2 modelthus the coding probabilities will be with exclusion reducing the total code space for b to 36 bitsan important special case of the exclusion policy occurs at the lowestlevel model for example the x at the end of the previous paragraph would finally be encoded with a probability of 1250 rather than 1256 because characters that have already occurred can never be predicted in the order 1 contextone slight further improvement to ppm is incorporated in the experiments deterministic scaling although it probably has negligible effect on our overall results we record it here for completenessexperiments show that in deterministic contexts for which d 1 the probability that the single character that has occurred before reappears is greater than the 1 1 implied by the above estimatorconsequently in this case the probability is increased in an ad hoc manner to 1 1inserting spaces into text can be viewed as a hidden markov modeling problembeing entirely adaptive the method works regardless of what language it is used withfor pedagogical purposes we will explain it with english textbetween every pair of characters lies a potential spacefigure 4 illustrates the model for the fragment tobeornottobeit contains one node for each letter in the text and one for each possible intercharacter space any given assignment of word boundaries to this text fragment will correspond to a path through the model from beginning to end of all possible paths we seek the one that gives the best compression according to the ppm text compression method suitably primed with english textthis path is the correct path corresponding to the text to be or not to be shown in bold in figure 441 markov modeling with context figure 4 can easily be converted into a markov model for a given order of ppmsuppose we use order 1 then we rewrite figure 4 so that the states are bigrams as shown in figure 5the 
interpretation of each state is that it corresponds to the last character of the string that labels the statethe very first state labeled t has no prior contextin ppm terms that character will be transmitted by escaping down to order 0 again the bold arrows in figure 5 shows the path corresponding to the string with spaces inserted correctlygrowing a tree for order 1 modeling of tobeornottobesimilar models could be written for higherorder versions of ppmfor example with an order 3 model states would be labeled by strings of length four and each state would have variants corresponding to all different ways of inserting space into the fourcharacter stringfor example the states corresponding to the sixth character of tobeornottobe would include beor and eor as well as eor eor and orit is not hard to see that the number of states corresponding to a particular character of the input string increases with model order according to the fibonnacci seriesfigure 5 shows two states per symbol for order 1 there are three states per symbol for order 2 five for order 3 eight for order 4 thirteen for order 5 and so ongiven a hidden markov model like the one in figure 5 where probabilities are supplied for each edge according to an order 1 compression model the space insertion problem is tantamount to finding the sequence of states through the model from beginning to end that maximizes the total probabilityor equivalently that minimizes the number of bits required to represent the text according to that modelthe following viterbistyle algorithm can be used to solve this problembeginning at the initial state the procedure traces through the model recording at each state the highest probability of reaching that state from the beginningthus the two descendants of the start node nodes to and t are assigned the probability of o and conditioned in each case on t being the prior character respectivelyas more arcs are traversed the associated probabilities are multiplied thus the node 0 receives the product of the probability of conditioned on t and of o conditioned on when the node ob is reached it is assigned the greater of the probabilities associated with the two incoming transitions and so on throughout the modelthis is the standard dynamic programming technique of storing with each state the result of the best way of reaching that state and using this result to extend the calculation to the next stateto find the optimal state sequence is simply a matter of recording with each state which incoming transition is associated with the greatest probability and traversing that path in the reverse direction once the final node is reachedthese models can be generated dynamically by proceeding to predict each character in turnfigure 6 shows the beginning of the tree that resultsfirst the initial node t is expanded into its two children t and tothen these are expanded in turnthe first has one child o because a space cannot be followed by another spacethe second has two o and obfigure 6 shows the further expansion of the o nodehowever the two children that are created already exist in the tree and so the existing versions of these nodes are used instead as in figure 6if this procedure is conthe space insertion procedure as implemented tinued the graph structure of figure 5 will be createdduring creation probability values can be assigned to the nodes and back pointers inserted to record the best path to each nodethe illustration in figure 6 is for an order 1 model but exactly the same procedure applies for higherorder 
ppm modelsour implementation uses a slight variant of the above procedure for finding the optimal place to insert spacesat each stage we consider the possibility of adding either the next character or the next character followed by a spacethis generates the structure shown in figure 7starting with the null string both t and t are generated as successor statesfrom each of these states either o or o can be added and these yield the next states shownthe procedure continues growing the trellis structure using an incremental strategy similar to that illustrated in figure 6 but modified to take into account the new growth strategy of adding either the next character or the next character followed by a spacethe search strategy we use is a variant of the stack algorithm for sequential decoding as new nodes are generated an ordered list is maintained of the best paths generated so faronly the best path is extendedthe metric used to evaluate a path is the number of bits required for the segmentation sequence it represents when compressed by the ppm modelit is necessary to delete paths from the list in order to make room for newly generated oneswe remove all paths that were more than m nodes shorter than the best path so far where m is the order of the ppm model we reasoned that it is extremely unlikelyat least for natural language sequencesthat such a path would ever grow to outperform the current best path because it already lags behind in code length despite the fact that m further letters must be encodedbefore describing experiments to assess the success of the new word segmentation method we first discuss measures that are used to evaluate the accuracy of automatic segmentationwe then examine the application of the new segmentation method to english text and show how it achieves results that significantly outperform the state of the artnext we describe application to a manually segmented corpus of chinese text again excellent results are achievedin a further experiment where we apply a model generated from the corpus to a new independent test file performance deteriorates considerablyas one might expectwe then apply the method to a different corpus and investigate how well the model transfers from one corpus to anotherwe end with a discussion of how the results vary with the order of the compression model used to drive the segmenterwe use three measures to evaluate the accuracy of automatic segmentation recall precision and error rateall evaluations use handsegmentation as the gold standard which the automatic method strives to attainto define them we use the terms number of words occurring in the handsegmentation number of words incorrectly identified by the automatic method number of words correctly identified by the automatic method recall and precision are standard information retrieval measures used to assess the quality of a retrieval system in terms of how many of the relevant documents are retrieved and how many of the retrieved documents are relevant the overall error rate can be defined as error rate this in principle can give misleading resultsan extreme condition is where the automatic method only identifies a single word leading to a very small error rate of 1n despite the fact that all words but one are misidentifiedhowever in all our experiments extreme conditions do not occur because n is always close to n and we find that the error rate is a useful overall indicator of the quality of segmentationwe also used the fmeasure to compare our results with others if the automatic method 
produces the same number of words as the handsegmentation recall and precision both become equal to one minus the error ratea perfect segmenter will have an error rate of zero and recall and precision of 100all these measures can be calculated automatically from a machinesegmented text along with the handsegmented gold standardboth texts are identical except for the points where spaces are inserted thus we record just the start and end positions of each word in both versionsfor example quota because aed fquot in the machinesegmented version is mapped to and quota because a ed fquot in the handsegmented version becomes the number of correctly and incorrectly segmented words is counted by comparing these two sets of positions indicated by matched and mismatched pairs respectivelythree correct and two incorrect in this exampleit may be helpful for nonchinese readers to briefly illustrate the success of the space insertion method by showing its application to english textthe first part of table 2 shows the original text with spaces in the proper placesthe second shows the text with spaces removed used as input to the segmentation procedurethe third shows the output of the ppmbased method described above while the fourth shows for comparison the output of a wordbased method for predicting the position of spaces useg for this experiment ppm was trained on the millionword brown corpus useg was trained on a far larger corpus containing 1 gb of data from the tipster collection both were tested on the same 500 kb extract from the wall street journalthe recall and precision for ppm were both 9952 while the corresponding figures for useg were 9356 and 9003 respectivelythis result is particularly noteworthy because ppm had been trained on only a small fraction of the amount of text needed for the wordbased schemethe same example was used by ponte and croft and the improved performance of the characterbased method is evident even in this small examplealthough the word micronite does not occur in the brown corpus it was correctly segmented using ppmlikewise inits was correctly split into in and itsppm makes just two mistakesfirst a space was not inserted into loews corp because the single quotwordquot requires only 543 bits to encode whereas loews corp requires 550 bitssecond an extra space was added to crocidolite because that reduced the number of bits required from 587 to 553our first series of experiments used part of guo jin mandarin chinese ph corpus containing one million words of newspaper stories from the xinhua news agency of pr china written between january 1990 and march 1991it is represented in the standard gb coding schemetable 3 shows the distribution of word lengths in the corpussinglecharacter words are the most frequent these and bigrams together constitute almost 94 of wordsnearly half the characters appear as constituents of twocharacter wordssome published figures for chinese language statistics indicate that this corpus may overrepresent singlecharacter words and underrepresent bigramsfor example liu gives figures for modern chinese of 5 75 14 and 6 for onecharacter twocharacter threecharacter and longer words respectivelyhowever it has been argued that considering the inherent uncertainty in chinese word segmentation generalpurpose segmentation algorithms should segment aggressively rather than conservatively consequently this corpus seems appropriate for our usetable 4 shows the results for five 500word test files from the corpuswe took part of the corpus that was not used for 
training divided it into 500word segments removed all spaces and randomly chose five segments as test filesthe results show an error rate varying from 12 to 66the resulting fmeasures indicate that the new algorithm performs better than the one described in hockenmaier and brew who report an fmeasure of 879 using trigram rulesthis is particularly significant because the two algorithms use training and test data from the same sourcethe results were also verified by checking them manuallythis produces slightly different results for two reasonsfirstly human judgment sometimes accepts a segmentation as correct even though it does not correspond exactly with the corpus versionfor example the last word in k is counted as correct even though in the corpus it is written pttt k secondly improper segmentations such as and g occur in the corpuswhen the program makes the same mistakes it counts as correct in automatic checking but incorrect in manual checkingthese two kinds of error virtually canceled each other when checked manually file 3 for example has five fewer errors for the first reason and six more for the second reason giving error counts of 21 and 20 for automatic and manual checking respectivelyin a second test models from this corpus were evaluated on completely separate data provided by the institute of computational linguistics of peking universitythis contained 39 sentences some of which are compound sentencessince no presegmented version was available all checking was manualthis test is interesting because it includes several sentences that are easily misunderstood three of which are shown in figure 8in the first which reads quoti have learned a lot from itquot the second and third characters combine into from it and the fourth and fifth characters combine into have learnedhowever the third and fourth characters taken together mean middle school which does not occur in the meaning of the sentencein the second and third sentences the first three characters are the samein the second quotphysics is very hard to learnquot the second and third characters should be separated by a space so that the third character can combine with the following two characters to mean to learnhowever in the third quotphysics is one kind of sciencequot the first three characters make a single word meaning physicsthe error rate recall and precision for this test material are 108 934 and 896 respectivelyperformance is significantly worse than that of table 4 because of the nature of the test fileprecision is distinctly lower than recallrecall fares better because many relevant words are still retrieved whereas precision suffers because the automatic segmenter placed too many word boundaries compared with the manual judgmenttwo aspects of the training data have a profound influence on the model accuracyfirst some errors are obviously caused by deficiencies in the training data such as improperly segmented common words and namessecond some errors stem from the topics covered by the corpusit is not surprising that the error rate increases when the training and testing text represent different topic areassuch as training on news text and testing on medical textthe rocling standard segmentation corpus contains about two million presegmented words represented in the big5 coding schemewe converted it to gb used one million words for training and compared the resulting model to that generated from the ph data also trained on one million wordsboth models were tested on 10 randomly chosen 1000word segments from each corpus 
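The recall, precision, and error-rate formulas are elided in the evaluation section above; the definitions implied there, with N words in the hand segmentation, c words correctly identified, and e words incorrectly identified by the automatic method (so that a single wrong word gives an error rate of 1/N), are recall = c/N, precision = c/(c+e), and error rate = e/N. A minimal sketch of computing these scores from word start/end positions, as in the "three correct and two incorrect" example above, follows; the function names are illustrative.

```python
def word_spans(segmented_text):
    """Start/end character offsets of each word, ignoring the spaces themselves."""
    spans, start = set(), 0
    for word in segmented_text.split():
        spans.add((start, start + len(word)))
        start += len(word)
    return spans

def score_segmentation(machine_text, hand_text):
    """Recall = c/N, precision = c/(c+e), error rate = e/N, plus F-measure,
    where positions are compared between machine and hand segmentations."""
    machine, hand = word_spans(machine_text), word_spans(hand_text)
    c = len(machine & hand)      # matched (start, end) pairs
    e = len(machine - hand)      # mismatched pairs produced by the machine
    n = len(hand)
    recall, precision, error_rate = c / n, c / (c + e), e / n
    f = (2 * precision * recall / (precision + recall)
         if precision + recall else 0.0)
    return recall, precision, error_rate, f
```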
the results are shown in table 5 in terms of the mean and standard deviation of the errorswhen the training and testing files come from the same corpus results are good with around 42 and 45 errors per thousand wordsnot surprisingly performance deteriorates significantly when the ph model is used to segment the rocling test files or vice versaseveral differences between the corpora influence performancemany english words are included in rocling whereas in ph only a few letters are used to repof wtiti in the ph corpusquotation marks also differ l 1 in rocling but quot quot in phin addition as is only to be expected in any large collection of natural language typographical errors occur in both corporathe overall result indicates that our algorithm is robustit performs well so long as the training and testing data come from the same source56 effect of the amount of training data for the rocling corpus we experimented with different amounts of training datafour models were trained with successively larger amounts of data 05m 1m 15m and 2m words each training file being an extension of the text in the preceding training filethe four models were tested on the 10 randomlychosen 1000word rocling segments used beforethe results for the individual test files in terms of error rate per thousand words are shown in figure 9 and summarized in table 6larger training sets generally give smaller error which is only to be expectedalthough the results for some individual test files flatten out and show no further improvement with larger training files and in some cases more training data actually increases the number of errorsoverall the error rate is reduced by about 25 for each doubling of the training datawe have experimented with compression models of different orders on the ph corpusgenerally speaking compression of text improves as model order increases up to a point determined by the logarithm of the size of the training texttypically little compression is gained by going beyond order 5 modelsfor segmentation we observe many errors when a model of order 1 is usedfor order 3 models most words are segmented with the same error rate as for order 5 models though some words are missed when order 2 models are usedfigure 10 shows some cases where the order 3 and order 5 models produce different resultssome order 5 errors are corrected by the order 3 model though others appear even with the lowerorder modelfor example both results in the first row are incorrect no space should be inserted in this case and the four characters should stand togetherhowever the order 3 result is to be preferred to the order 5 result because both twocharacter words do at least make sense individually whereas the initial three characters in the order 5 version do not represent a word at allin the second row the order 5 result is incorrect because the second component does not represent a wordin the order 3 result the first word containing two characters is a person namethe second word could also be correct as it stands though it would be equally correct if a space had been inserted between the two bigramson the whole we find that the order 3 model gives the best results overall although there is little difference between orders 3 4 and 5word segmentation forms a valuable component of any chinese digital library systemit improves fulltext retrieval in two ways higherprecision searching and the ability to incorporate relevance rankingthis increases the effectiveness of fulltext search and helps to provide users with better 
feedbackfor example one study concludes that the performance of an unsegmented characterbased query is about 10 worse than that of the corresponding segmented query many emerging digital library technologies also presuppose word segmentationfor example text summarization document clustering and keyphrase extraction all rely on word frequenciesthese would not work well on unsegmented text because character frequencies do not generally reflect word frequenciesonce the source text in a digital library exceeds a few megabytes fulltext indexes are needed to process queries in a reasonable time fulltext indexing was developed using languages where word boundaries are notated and the techniques that were developed rely on wordbased processingalthough some techniquesfor example stemming and casefoldingare not applicable to chinese information retrieval many areexamples include heuristics for relevance ranking and query expansion using a language thesaurusof course fulltext indexes can be built from individual characters rather than wordshowever these will suffer from the problem of low precisionsearches will return many irrelevant documents where the same characters are used in contexts different from that of the queryto reduce false matches to a reasonable level auxiliary indexes will have to be createdthese will be much larger than regular wordbased indexes of paragraphs or documents and will still not be as accurateinformation retrieval systems often rank the results of each search giving preference to documents that are more relevant to the query by placing them nearer the beginning of the listrelevance metrics are based on the observation that infrequent words are more important than common ones and should therefore rate more highlyword segmentation is essential for this purpose because the relationship between the frequency of a word and the frequency of the characters that appear within it is often very weakwithout word segmentation the precision of the result set will be reduced because relevant documents are less likely to be close to the top of the listl4 for example the word 11 el is an infrequent word that appears only twenty times in the ph corpusbut its two characters occur frequently i 13531 times and er 45010 timesin fact iii is the second most frequent character in the corpus appearing in 443 separate wordscharacterbased ranking would place little weight on these two characters even though they are 4extremely important if the query is the word t is another frequent character appearing 4553 times in the ph corpushowever in 4481 of those cases it appears by itself and contributes little to the meaning of the textif a query contained both of these words far more weight would be given to than to the individual characters in iiword counts also give feedback on the effectiveness of a querythey help users judge whether their query was too wide or too narrow and provide information on which of the terms are most appropriatewordbased processing is essential to a number of emergent new technologies in the digital library fieldstatistical approaches are enjoying a resurgence in natural language analysis examples include text summarization document clustering and keyphrase extractionall of these statistical approaches are based on words and word frequenciesfor instance keywords and keyphrases for a document can be determined automatically based on features such as the frequency of the phrase in the document relative to its frequency in an independent corpus of like material and its position 
of occurrence in the document a decomposition of text into its constituent words is an essential prerequisite for the application of such techniquesthe problem of word segmentation of chinese text is important in a variety of contexts particularly with the burgeoning interest in digital libraries and other systems that store and process text on a massive scaleexisting techniques are either linguistically based using a dictionary of words or rely on handcrafted segmentation rules or use adaptive models that have been specifically created for the purpose of chinese word segmentationwe have developed an alternative based on a generalpurpose characterlevel model of textthe kind of models used in the very best text compression schemesthese models are formed adaptively from training textthe advantage of using characterlevel models is that they do not rely on a dictionary and therefore do not necessarily fail on unusual wordsin effect they can fall back on general properties of language statistics to process novel textthe advantage of basing models on a corpus of training text is that particular characteristics of the text are automatically taken into account in language statisticsas exemplified by the significant differences between the models formed for the ph and rocling corporaencouraging results have been obtained using the new schemeour results compare very favorably with the results of hockenmaier and brew on the ph corpus unfortunately no other researchers have published quantitative results on a standard corpusfurther work is needed to analyze the results of the rocling corpus in more detailthe next step is to use automatically segmented text to investigate the digital library applications we have described information retrieval text summarization document clustering and keyphrase extractionwe are grateful to stuart inglis hong chen and john cleary who provided advice and assistancethe corrected version of guo jin ph corpus and the rocling corpus were provided by julia hockenmaier and chris brew at the university of edinburgh and the chinese knowledge information processing group of academia sirtica respectivelythe institute of computational linguistics of peking university also provided some test materialbill teahan acknowledges the generous support of the department of information technology lund university swedenthanks also to the anonymous referees who have helped us to improve the paper significantly
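Since the escape and symbol probability formulas are elided in the description above, it is worth noting that the counts shown for the tobeornottobe example are consistent with escape method D: in a context seen n times with d distinct successors, the escape probability is d/(2n) and a character seen c times in that context gets probability (2c - 1)/(2n). The sketch below is a minimal illustration under that assumption, written in Python with illustrative names (the paper gives no code): a character-context model trained on pre-segmented text, a code-length routine that backs off through escapes (exclusion and deterministic scaling are omitted for brevity), and a simplified Viterbi search over space insertions that merges paths sharing the same truncated context. The paper's own implementation instead uses a stack-decoder variant that extends only the current best path and prunes paths more than m nodes behind it.

```python
import math
from collections import defaultdict

class PPMStyleModel:
    """Character model in the spirit of PPM: counts of each character in
    every context of length 0..max_order, trained on pre-segmented text
    (spaces are treated as ordinary characters)."""

    def __init__(self, max_order=5, alphabet_size=256):
        self.max_order = max_order
        self.alphabet_size = alphabet_size
        # counts[context][char] = times char followed context in training text
        self.counts = defaultdict(lambda: defaultdict(int))

    def train(self, text):
        for i, char in enumerate(text):
            history = text[max(0, i - self.max_order):i]
            for order in range(len(history) + 1):
                self.counts[history[len(history) - order:]][char] += 1

    def cost(self, history, char):
        """Bits needed to code `char` after `history`, backing off through
        shorter contexts via escape events (escape method D assumed)."""
        bits = 0.0
        history = history[-self.max_order:] if self.max_order else ""
        for order in range(len(history), -1, -1):
            dist = self.counts.get(history[len(history) - order:])
            if not dist:
                continue        # unseen context: escape needs no code space
            n = sum(dist.values())
            if char in dist:
                return bits - math.log2((2 * dist[char] - 1) / (2 * n))
            bits -= math.log2(len(dist) / (2 * n))   # code an escape event
        # final fallback: a uniform model over the raw alphabet
        return bits + math.log2(self.alphabet_size)


def insert_spaces(text, model, space=" "):
    """Viterbi-style search for the space insertion that lets `text` be coded
    in the fewest bits.  Each state is the last max_order characters of the
    segmented output; paths reaching the same state are merged, keeping the
    cheaper one."""
    states = {"": (0.0, "")}     # context -> (bits so far, output so far)
    for i, char in enumerate(text):
        new_states = {}
        for context, (bits, out) in states.items():
            # emit the character alone, or the character followed by a space
            options = [char] if i == len(text) - 1 else [char, char + space]
            for emitted in options:
                cost, ctx = bits, context
                for c in emitted:
                    cost += model.cost(ctx, c)
                    ctx = (ctx + c)[-model.max_order:]
                if ctx not in new_states or cost < new_states[ctx][0]:
                    new_states[ctx] = (cost, out + emitted)
        states = new_states
    return min(states.values())[1]   # segmentation with the lowest code length
```

Under these assumptions the cost routine reproduces the arithmetic worked through earlier: a PPMStyleModel with max_order 2 trained on tobeornottobe gives model.cost("tobeornottobe", "b") of roughly 5.1 bits, matching the no-exclusion figure quoted above for encoding b.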
J00-3004
A compression-based algorithm for Chinese word segmentation. Chinese is written without using spaces or other word delimiters. Although a text may be thought of as a corresponding sequence of words, there is considerable ambiguity in the placement of boundaries. Interpreting a text as a sequence of words is beneficial for some information retrieval and storage tasks: for example, full-text search, word-based compression, and keyphrase extraction. We describe a scheme that infers appropriate positions for word boundaries using an adaptive language model that is standard in text compression. It is trained on a corpus of presegmented text and, when applied to new text, interpolates word boundaries so as to maximize the compression obtained. This simple and general method performs well with respect to specialized schemes for Chinese language segmentation. Our n-gram generative language modeling based approach does not use domain knowledge.
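A hypothetical end-to-end use of the sketch given after the paper above would train the character model on pre-segmented text and then insert spaces into new unsegmented input. The file name and input string below are placeholders, and PPMStyleModel and insert_spaces are the illustrative names from that sketch rather than from the authors' implementation.

```python
# Train on a corpus of pre-segmented text, then segment new input.
training_text = open("presegmented_corpus.txt", encoding="utf-8").read()
model = PPMStyleModel(max_order=3)   # order 3 gave the best results overall
model.train(training_text)

unsegmented = "theunsegmentedinputgoeshere"
print(insert_spaces(unsegmented, model))
```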
an empirically based system for processing definite descriptions we present an implemented system for processing definite descriptions in arbitrary domains the design of the system is based on the results of a corpus analysis previously reported which highlighted the prevalence of discoursenew descriptions in newspaper corpora the annotated corpus was used to extensively evaluate the proposed techniques for matching definite descriptions with their antecedents discourse segmentation recognizing discoursenew descriptions and suggesting anchors for bridging descriptions universidade do vale do rio dos sinos university of edinburgh we present an implemented system for processing definite descriptions in arbitrary domainsthe design of the system is based on the results of a corpus analysis previously reported which highlighted the prevalence of discoursenew descriptions in newspaper corporathe annotated corpus was used to extensively evaluate the proposed techniques for matching definite descriptions with their antecedents discourse segmentation recognizing discoursenew descriptions and suggesting anchors for bridging descriptionsmost models of definite description processing proposed in the literature tend to emphasise the anaphoric role of these elementsthis approach is challenged by the results of experiments we reported previously in which subjects were asked to classify the uses of definite descriptions in wall street journal articles according to schemes derived from proposals by hawkins and prince the results of these experiments indicated that definite descriptions are not primarily anaphoric about half of the time they are used to introduce a new entity in the discoursein this paper we present an implemented system for processing definite descriptions based on the results of that earlier studyin our system techniques for recognizing discoursenew descriptions play a role as important as techniques for identifying the antecedent of anaphoric onesa central characteristic of the work described here is that we intended from the start to develop a system whose performance could be evaluated using the texts annotated in the experiments mentioned aboveassessing the performance of an nlp system on a large number of examples is increasingly seen as a much more thorough evaluation of its performance than trying to come up with counterexamples it is considered essential for language engineering applicationsthese advantages are thought by many to offset some of the obvious disadvantages of this way of developing nlp theoriesin particular the fact that given the current state of language processing technology many hypotheses of interest cannot be tested yet as a result quantitative evaluation is now commonplace in areas of language engineering such as parsing and quantitative evaluation techniques are being proposed for semantic interpretation as well for example at the sixth and seventh message understanding conferences which also included evaluations of systems on the socalled coreference task a subtask of which is the resolution of definite descriptionsthe system we present was developed to be evaluated in a quantitative fashion as well but because of the problems concerning agreement between annotators observed in our previous study we evaluated the system both by measuring precisionrecall against a quotgold standardquot as done in muc and by measuring agreement between the annotations produced by the system and those proposed by the annotatorsthe decision to develop a system that could be 
quantitatively evaluated on a large number of examples resulted in an important constraint we could not make use of inference mechanisms such as those assumed by traditional computational theories of definite description resolution too many facts and axioms would have to be encoded by hand for theories of this type to be tested even on a mediumsized corpusour system therefore is based on a shallowprocessing approach more radical even than that attempted by the first advocate of this approach carter or by the systems that participated in the muc evaluations since we made no attempt to finetune the system to maximize performance on a particular domainthe system relies only on structural information on the information provided by preexisting lexical sources such as wordnet on minimal amounts of general handcoded information or on information that could be acquired automatically from a corpusas a result the system does not really have the resources to correctly resolve those definite descriptions whose interpretation does require complex reasoning we nevertheless developed heuristic techniques for processing these types of definites as well the idea being that these heuristics may provide a baseline against which the gains in performance due to the use of commonsense knowledge can be assessed more clearly2 the paper is organized as follows we first summarize the results of our previous corpus study and then discuss the model of definite description processing that we adopted as a result of that work and the general architecture of the system in section 4 we discuss the heuristics that we developed for resolving anaphoric definite descriptions recognizing discoursenew descriptions and processing bridging descriptions and in section 5 how the performance of these heuristics was evaluated using the annotated corpusfinally we present the final configuration of the two versions of the system that we developed review other systems that perform similar tasks and present our conclusions and indicate future work as mentioned above the architecture of our system is motivated by the results concerning definite description use in our corpus discussed in poesio and vieira in this section we briefly review the results presented in that paper2 in fact it is precisely because we are interested in identifying the types of commonsense reasoning actually used in language processing that we focused on definite descriptions rather than on other types of anaphoric expressions that can be processed much more effectively on the basis of syntactic information alone we used a subset of the penn treebank i corpus from the acldci cdrom containing newspaper articles from the wall street journalwe divided the corpus into two parts one containing about 1000 definite descriptions was used as a source during the development of the system we will refer to these texts as corpus 14 the other part containing about 400 definite descriptions was kept aside during development and used for testing we will refer to this subset as corpus 24 the bestknown studies of definite description use classify definite descriptions on the basis of their relation with their antecedenta fundamental distinction made in these studies is between descriptions that denote the same discourse entity as their antecedent descriptions that denote an object that is in some way quotassociatedquot with the antecedentfor example it is part of it as in a car the wheel and descriptions that introduce a new entity into the discoursein the case of semantic identity 
between definite description and antecedent, a further distinction can be made depending on the semantic relation between the predicate used in the description and that used for the antecedent. The predicate used in an anaphoric definite description may be a synonym of the predicate used for the antecedent, a generalization (hypernym), and even sometimes a specialization (hyponym); in fact, the NP introducing the antecedent may not have a head noun at all, e.g., when a proper name is used, as in Bill Clinton ... the president. We will use the term direct anaphora when both description and antecedent have the same head noun, as in a house ... the house. Direct anaphors are the easiest definite descriptions for a shallow system to resolve; in all other cases, as well as when the antecedent and the definite description are related in a more indirect way, lexical knowledge or, more generally, encyclopedic knowledge is needed. All of the classifications mentioned above also acknowledge the fact that not all definite descriptions depend on the previous discourse for their interpretation: some refer to an entity in the physical environment, others to objects which are assumed to be known on the basis of common knowledge, and still others are licensed by virtue of the semantics of their head noun and complement. In the experiments discussed in Poesio and Vieira we asked our subjects to classify all definite description uses in our two corpora. These experiments had the dual objective of verifying how easy it was for human subjects to agree on the distinctions between definite descriptions just discussed, and producing data that we could use to evaluate the performance of a system. The classification schemes we used were simpler than those proposed in the literature just mentioned, and were motivated on the one hand by the desire to make the annotation uncomplicated for the subjects employed in the empirical analysis, and on the other hand by our intention to use the annotation to get an estimate of how well a system using only limited lexical and encyclopedic knowledge could do. We ran two experiments using two slightly different classification schemes. In the first experiment we used the following three classes. In the second experiment we treated all anaphoric definite descriptions as part of one class and all inferrables as part of a different class, without significant changes in the agreement results. Agreement among annotators was measured using the K statistic: K measures agreement among k annotators over and above chance agreement. The K coefficient of agreement is defined as K = (P(A) - P(E)) / (1 - P(E)), where P(A) is the proportion of times the annotators agree and P(E) is the proportion of times that we would expect them to agree by chance. The interpretation of K figures is an open question, but in the field of content analysis, where reliability has long been an issue, K above .8 is generally taken to indicate good reliability, whereas .68 < K < .8 allows tentative conclusions to be drawn. Carletta et al. observe, however, that in other areas, such as medical research, much lower levels of K are considered acceptable. An interesting overall result of our study was that the most reliable distinction that our annotators could make was that between first-mention and subsequent-mention: the measure of agreement for the three-way distinction just discussed was K = .73. The second interesting result concerned the distribution of definite descriptions in the three classes above: we found that about half of the definite descriptions were discourse-new. The distribution of the definite descriptions in classes in our first
experiment according to annotators a and 13 are shown in tables 1 and 2 respectivelythe third main result was that we found very little agreement between our subjects on identifying briding descriptions in our second experiment the agreement on bridging descriptions was k 024this was due in part to the fact that many definite descriptions could be classified in more than one class and in part to the fact that in the case of descriptions indirectly related to their antecedents the discourse might provide more than one distinct equally suitable anchor the most common classification problem is distinguishing between larger situation and bridging descriptions see also fraurud and poesio and vieira the results just discussed led us to adopt a model of definite descriptions processing advanced in fraurud and further elaborated in poesio and vieira according to which interpreting definite descriptions in written discourse is not just a matter of checking whether there is a suitable antecedent for the description but also involves a classification task recognizing whether a description is in fraurud terms firstmention or subsequentmentionor in our terminology direct anaphora discoursenew or bridgingthe crucial aspect of fraurucl proposal is the idea that interpreting definite descriptions is not just a matter of looking for an antecedent separate rules for recognizing firstmention definite descriptions are needed as wellthe fact that there was so much disagreement about bridging descriptions and their anchors led us to try to keep the rules for processing them fairly separate from those for processing other types of descriptions and to attempt to use agreement measures to evaluate the performance of the system in addition to more traditional precision and recall figuresthe results discussed above further support fraurud criticism of the approach to processing definite nps based on the assumption that they are primarily anaphoricbecause of the large proportion of firstmention definites found in the texts she examined fraurud claims that a model where the processing of firstmention definites always involves a failing search for an already established discourse referent as a first step seems less attractivea reverse ordering of the procedures is quite obviously no solution to this problem but a simultaneous processing as proposed by bosch and geurts might befraurud proposes contra heim that processing a definite np may involve establishing a new discourse entity8 this new discourse entity may then be linked to one or more anchors in the text or to a background referentfraurud discusses the example of the description the king interpreted relationally encountered in a text in which no king has been previously mentionedlexicoencyclopedic knowledge would provide the information that a king is related to a period and a country these would constitute the anchorsthe selection of the anchors would identify the pertinent period and country and this would make possible the identification of a referent say for the anchors 1989 and sweden the referent identified would be carl gustav xvi the most interesting aspect of fraurucl proposal is the hypothesis that firstmention definites are not necessarily recognized simply because no suitable antecedent has been found independent strategies for recognizing them may be involvedthis hypothesis is consistent with lobner proposal that the fundamental property of a definite description is that it denotes a function this function can be part of the meaning assigned to the 
definite description by the grammar, or can be specified by context. Fraurud's and Löbner's ideas can be translated into a requirement that a system have separate methods or rules for recognizing discourse-new descriptions in addition to rules for resolving anaphoric definite descriptions; these rules may run in parallel with the rules for resolving anaphoric definites, rather than after them. Rather than deciding a priori on the question of whether the heuristic rules for identifying discourse-new descriptions should be run in parallel with resolution or after it, we treated this as an empirical question. We made the architecture of the system fairly modular, so that we could both try different heuristics and try applying them in a different order, using the corpus for evaluation. We discuss all the heuristics that we tried in Section 4, and our evaluation of them in Section 5. The overall architecture of our system is shown in Figure 1. The system attempts to classify each definite description as either direct anaphora, discourse-new, or bridging description. In addition to this classification, the system tries to identify the antecedents of anaphoric descriptions and the anchors of bridging descriptions. The system processes parsed newswire texts from the Penn Treebank I, constructing a fairly simple discourse model that consists of a list of discourse entities that may serve as potential antecedents, according to the chosen segmentation algorithm. The system uses the discourse model, syntactic information, and a small amount of lexical information to classify definite descriptions as discourse-new or to link them to anchors in the text; WordNet is also consulted by the version of the system that attempts to resolve bridging descriptions. The system is implemented in SICStus Prolog. Input. The texts in the Penn Treebank corpus consist of parsed sentences represented as Lisp lists. During a preprocessing phase, a representation in Prolog list format is produced for each sentence, and the noun phrases it contains are extracted. The output of this preprocessing phase is passed to the system proper. For example, the sentence Mideast politics have calmed down, and the squabbling within the Organization of Petroleum Exporting Countries seems under control, for now is represented in the Treebank by a bracketing along the lines of

( (S (NP Mideast politics)
     (VP have (VP calmed (PP down)))
     and
     (S (NP the squabbling
            (PP within (NP the Organization
                           (PP of (NP Petroleum Exporting Countries)))))
        (VP seems (PP under (NP control)) (PP for now)))) )

and the input to the system after the preprocessing phase includes, among others, the extracted noun phrases (NP the squabbling (PP within (NP the Organization (PP of (NP Petroleum Exporting Countries))))), (NP the Organization (PP of (NP Petroleum Exporting Countries))), (NP Petroleum Exporting Countries), and (NP control). Note that all nested NPs are extracted, and that embedded NPs such as the Organization of Petroleum Exporting Countries are processed before the NPs that embed them. Output. The system outputs the classification it has assigned to each definite description in the text, together with the coreferential and bridging links it has identified. We developed three types of heuristics. We present in turn the heuristics for each class of definite descriptions in this section and discuss their limitations. The final configuration of the system was arrived at on the basis of an extensive evaluation of the heuristics, using the corpus annotated in our previous work; the evaluation was used both to determine which version of each heuristic worked better, and to identify the best order in which to try them. Our system's strategy for resolving direct anaphora is very simple: it just looks for a potential antecedent whose head matches the head noun of the definite description. The key issues to address in doing this are discussed in turn below. Although this strategy works most of the time, it
does have some problemsone of these problems are headless definites a second problem are definites whose head is not represented in the treebank as an atom at the determiner level corpus 1 for example contains 17 definite descriptions with these problems a third problem we found that different recallprecision tradeoffs can be achieved depending on the choice of potential antecedentsie depending on whether all nps are considered as possible antecedents or only indefinite nps or various other subsetsso we ran experiments to identify the best group of potential antecedentsfour different np subsets were considered the results obtained by considering each subset of the total number of nps as potential antecedents are discussed in section 52413 segmentationthe set of potential antecedents of anaphoric expressions is also restricted by the fact that antecedents tend to have a limited quotlife spanquotie they only serve as antecedents for anaphoric expressions within pragmatically determined segments of the whole text in our corpus we found that about 10 of direct anaphoric definite descriptions have more than one possible antecedent if segmentation is not taken into account in for example the antecedent of the housej mentioned in sentence 50 is not the house mentioned earlier in sentences 2 and 19 but another house implicitly introduced in sentence 49 by the reference to the yard lurched two feet off its foundation during last week earthquake19others grab books records photo albums sofas and chairs working frantically in the fear that an aftershock will jolt the house i again50the house itself located about 50 yards from the collapsed section of doubledecker highway interstate 880 was pushed about four feet off its foundation and then collapsed into its basement65as ms johnson stands outside the hammack house after winding up her chores there the house begins to creak and swayin general it is not sufficient to look at the most recent antecedents only this is because segments are organized hierarchically and the antecedents introduced in a segment at a lower level are typically not accessible from a segment at a higher level whereas the antecedents introduced in a prior segment at the same level may belater in for example the house in sentence 50 becomes inaccessible again and in sentence 65 the text starts referring again to the house introduced in sentence 2automatically recognizing the hierarchical structure of texts is an unresolved problem as it involves reasoning about intentions14 better results have been achieved on the simpler task of quotchunkingquot the text into sequences of segments generally by means of lexical density measures the methods for limiting the life span of discourse entities that we considered for our system are even simplerone type of heuristic we looked at are windowbased techniques ie considering only the antecedents within fixedsize windows of previous sentences although we allow some discourse entities to have a longer life span we call this method loose segmentationmore specifically a discourse entity is considered a potential antecedent for a definite description when the antecedent head is identical to the description head and we also considered an even simpler recency heuristic this involves keeping a table indexed by the heads of potential antecedents such that the entry for noun n contains the index of the last occurrence of an antecedent with head n finally we considered combinations of segmentation and recency like himself could run pinkerton better than an 
unfocused conglomerate or investment bankerthe heuristic method we developed to deal with postmodification is to compare the description and antecedent preventing resolution in those cases where both are postmodified and the modifications are not the sameas mentioned above a fundamental characteristic of our system is that it also includes heuristics for recognizing discoursenew descriptions on the basis of syntactic and lexical features of the noun phraseour heuristics are based on the discussion in hawkins who identified a number of correlations between certain types of syntactic structure and discoursenew descriptions particularly those he called quotunfamiliarquot definites including15 the occurrence of premodifiers such as first or best when accompanied with full relatives eg the first person to sail to america finally we found that three classes of what hawkins called quotlarger situationquot definites can also be recognized on the basis of heuristics exploiting syntactic and lexical features in our corpus study we found that our subjects did much better at identifying discoursenew descriptions all together than they did at distinguishing unfamiliar from larger situation cases this finding was confirmed by our implementation although each of the heuristics is designed in principle to identify only one of the uses they work better at identifying together the whole class of discoursenew descriptions identified by comparing the head noun or modifiers of the definite np with a list of predicates that are either functional or likely to take a complement our list of predicates that when taking np complements are generally used to introduce discoursenew entities was compiled by hand and currently includes the nouns fact result conclusion idea belief saying and remarkin these cases what licenses the use of a definite is not anaphoricity but the fact that the head noun can be interpreted as 16 in the systems participating in muc definite descriptions occurring in appositions are treated as anaphoric on the preceding np our system considers the np and the apposition as a unit that introduces a new referent to the discoursevieira and poesio processing definite descriptions semantically functional the noun complement specifies the argument of the functionfunctionality is enough to license the use of the definite description an example of definite description classified as discoursenew on these grounds is given in when encountering a definite whose head noun occurs in this list the system checks if a complement is present or if the definite appears in a copular construction a second list of special predicates consulted by the system includes what hawkins called unexplanatory modifiers these include adjectives such as first last best most maximum minimum and only and superlatives in generalall of these adjectives are predicate modifiers that turn a head noun into a function therefore againaccording to lobrterlicensing the use of a definite even when no antecedent is present when applying this heuristic the system verifies the presence of a complement for some of the modifiers but not for superlativesfinally our system uses a list of special predicates that we found to correlate well with larger situation uses this list consists mainly of terms indicating time reference and includes the nouns hour time morning afternoon night day week month period quarter year and their respective pluralsan example from the corpus is only 14505 wells were drilled for oil and natural gas in the yous in the first 
nine months of the yearother definites typically used with a larger situation interpretation are the moon the sky the pope and the weatherit should be noted that although these constructions may indicate a discoursenew interpretation these expressions may also be used anaphorically this is one of the cases in which a decision has to be made concerning the relative priority of different heuristicswe discuss this issue further in connection with the evaluation of the system performance in section 518 422 restrictive and nonrestrictive modificationa second set of heuristics for identifying discoursenew descriptions that we derived from hawkins suggestions and 17 the list should be made more comprehensive so far it includes the cases observed in the corpus analysis and a few other similar modifiers15 more recently bean and riloff have proposed methods for automatically extracting from a corpus heads that correlate well with discourse novelty from our corpus analysis look for restrictive modificationquot we developed patterns to recognize restrictive postmodification and nonrestrictive postmodification we also tested the correlation between discourse novelty and premodificationwe discuss each of these heuristics in turnrestrictive postmodificationhawkins pointed out that unfamiliar defirtites often include referentestablishing relative clauses and associative clauses while warning that not all relative clauses are referentestablishingsome statistics about this correlation were reported by fraurud she found that in her corpus 75 of complex definite nps were firstmentiona great number of definite descriptions with restrictive postmodifiers are unfamiliar in our corpus as well in fact restrictive postmodification was found to be the single most frequent feature of firstmention descriptionsconstructions of this type are good indicators of discourse novelty because a restrictive poshnodifier may license the use of a definite description either by providing a link to the rest of the discourse or by making the description into a functional conceptlooking for restrictive postmodifiers might therefore be a good way of identifying discoursenew descriptionsthe distribution of restrictive postmodifiers in our corpus is shown in table 3 examples of each type of postmodifier are given belowrelative clauses these are finite clauses sometimes introduced by relative pronouns such as who whom which where when why and that prepositional phrases and ofclauses quirk et al found that prepositional phrases are the most common type of postmodification in englishthree or four times more frequent than either finite or nonfinite clausal postmodificationthis was confirmed by our corpus study the modification is nonrestrictive when the head provides sufficient information to identify the discourse entity so that the information provided by the modification is not essential for identification descriptions are shown in table 4 ofclauses are the most commonour program uses the following patterns to identify restrictive postmodifiers in the treebank sometimes the modified np is embedded in another np so structures like are also considered npnpthe_premodifiers__head_clauseflnonrestrictive postmodificationwe found it important to distinguish restrictive from nonrestrictive postmodification since in our corpus definite descriptions with nonrestrictive postmodifiers were generally not discoursenewour system recognizes nonrestrictive postrnodifiers by the simple yet effective heuristic of looking for commasthis heuristic correctly 
recognizes nonrestrictive postmodification in cases like the substance discovered almost by accident is very important which are annotated in the penn treebank i as follows 20 note that an ni may have zero one or more prernodifiersrestrictive prernodificationrestrictive modification is not as common in prenominal position as in posthead position but it is often used and was also found to correlate well with larger situation and unfamiliar uses of definite descriptions a restrictive premodifier may be a noun a proper noun or an adjectivesometimes numerical figures are used as restrictive premodifiers as in a native of the area he is back now after riding the oilfield boom to the top then surviving the bust running an oklahoma city convenience store the 1987 stock market crash the heuristic we tested was to classify definite descriptions premodified by a proper noun as larger situation423 appositionsduring our corpus analysis we found additional syntactic patterns that appeared to correlate well with discourse novelty yet had not been discussed by hawkins such as definite descriptions occurring in appositive constructions they usually refer to the np modified by the apposition therefore there is no need for the system to look for an antecedentappositive constructions are treated in the treebank as np modifiers therefore the system recognizes an apposition by checking whether the definite occurs in a complex noun phrase with a structure consisting of a sequence of noun phrases one of which is a name or is premodified by a name as in the examples in pp of npphillipspetroletuniii c the oboist heinz holliger d nu nptheoboist npheinzholliger in fact a definite description may itself be modified by an apposition eg an indefinite np as shown by such cases of appositive constructions are also recognized by the system the sandhills luncheon cafe a tin building in midtownother examples of apposition recognized by the system are a the very countercultural chamber group tashi b the new chancellor john major c the sharpshooter a freshly drilled oil well two miles deep 21 our system cannot distinguish adjectives or verbs from nouns in premoclification because it works directly off the parsed version of the treebank without looking at partofspeech tags often involve discoursenew descriptionswe developed the following heuristic for handling copula constructionsif a description occurs in subject position the system looks at the vpif the head of the vp is the verb to be to seem or to become and the complement of the verb is not an adjectival phrase the system classifies the description as discoursenewtwo examples correctly handled by this heuristic are shown in the syntactic representation of these cases in the penn treebank i is shown in if the complement of the verb is an adjective the subject is typically interpreted referentially and should not be considered discoursenew on the basis of its complement adjectival complements are represented as follows in the treebank the first appearance of these definite descriptions in the text is usually a discoursenew description subsequent mentions of proper names are regarded as cases of anaphorato recognize proper names the system simply checks whether the head is capitalizedif the test succeeds the definite is classified as a larger situation usebridging descriptions are the definite descriptions that a shallow processing system is least equipped to handlelinguistic and computational theories of bridging descriptions identify two main subtasks involved in their 
resolution finding the element in the text to which the bridging description is related and identifying the relation holding between the bridging description and its anchor the speaker is licensed to use a bridging description when he or she can assume that the commonsense 22 note that this test is performed just after trying to find an antecedent so that the second instance of the same proper noun will be classified as an anaphoric use knowledge required to identify the relation is shared by the listener this dependence on commonsense knowledge means that in general a system can only resolve bridging descriptions when supplied with an adequate knowledge base for this reason the typical way of implementing a system for resolving bridging descriptions has been to restrict the domain and feed the system with handcoded world knowledge a broader view of bridging phenomena is presented in hahn strube and markert they make use of a knowledge base from which they extract conceptual links to feed an adaptation of the centering model the relation between bridging descriptions and their anchors may be arbitrarily complex and the same description may relate to different anchors in a text this makes it difficult to decide what the intended anchor and the intended link are for all these reasons this class has been the most challenging problem we have dealt with in the development of our system and the results we have obtained so far can only be considered very preliminarynevertheless we feel that trying to process these definite descriptions is the only way to discover which types of commonsense knowledge are actually neededour work on bridging descriptions began with the development of a classification of bridging descriptions according to the kind of information needed to resolve them rather than on the basis of the possible relations between descriptions and their anchors as is typical in the literaturethis allowed us to get an estimate of what types of bridging descriptions we might expect our system to resolvethe classification is as follows in the rest of this section we describe the heuristics we developed for handling the first three of these classes lexical bridges bridges based on names and bridges to entities introduced by nonhead nouns in a compound nominal 441 bridging descriptions and wordnetin order to get a system that could be evaluated on a corpus containing texts in different domains we used wordnet as an approximation of a lexical knowledge sourcewe developed a wordnet interface that reports a possible semantic link between two nouns when one of the following is true sometimes finding a relation between two predicates involves complex searches through wordnet hierarchyfor example there may be no relation between two head nouns but there is a relation between compound nouns in which these nouns appear thus there is no semantic relation between recordalbum but only a synonymy relation between record _albumalbum we found that extended searches of this type or searches for indirect meronymy relations yielded extremely low recall and precision at a very high computational cost both types of search were dropped at the begirming of the tests we ran to process the corpus consulting wordnet the results of our tests with wordnet are presented in section 54 that refer back to entities introduced by proper names are very common in newspaper articlesprocessing such descriptions requires determining an entity type for each name in the text that is if we recognize pinkerton inc as an entity of 
type company we can then resolve the subsequent description the company or even a description such as the firm by finding a synonymy relation between company and firm using wordnetthis socalled named entity recognition task has received considerable attention recently and was one of the tasks evaluated in the sixth and seventh message understanding conferencesin muc6 15 different systems participated in the competition for the version of the system discussed and evaluated here we implemented a preliminary algorithm for named entity recognition that we developed ourselves a more recent version of the system uses the named entity recognition software developed by hcrc for the muc7 competition wordnet contains the types of a few namestypically of famous people countries states cities and languagesother entity types can be identified using appositive constructions and abbreviations as cuesour algorithm for assigning a type to proper names is based on a mixture of the heuristics just describedthe system first looks for the abovementioned cues to try to identify the name typeif no cue is found pairs consisting of the proper name and each of the elements from the list country city state continent language person are consulted in our wordnet interface to verify the existence of a semantic relationthe recall of this algorithm was increased by including a backtracking mechanism that reprocesses a text filling in the discourse representation with missing name typeswith this mechanism we can identify later the type for the name morishita in a textual sequence in which the first occurrence of the name does not provide surface indication of the entity type eg morishita mr morishitathe second mention includes such a clue by processing the text twice we recover such missing typesafter finding the types for names the system uses the techniques previously described for samehead matching or wordnet lookup to match the descriptions with the types found for previous named entities443 compound nounssometimes the anchor for a bridging description is a nonhead noun in a compound noun stock market crash the markets one way to process these definite descriptions would be to update the discourse model with discourse referents not only for the np as a whole but also for the embedded nounsfor example after processing stock market crash we could introduce a discourse referent for stock market and another discourse referent for stock market crash23 the description the markets would be coreferring with the first of these referents and then we could simply use our anaphora resolution algorithmsthis solution however makes available discourse referents that are generally inaccessible for anaphora for example it is generally accepted that in a deer is not accessible for anaphoric reference i saw la deeri hunter1t7 was deadtherefore we followed a different routeour algorithm for identifying anchors attempts to match not only heads with heads but alsoin this section we discuss the tests we ran to arrive at a final configuration of the systemthe performance of the heuristics discussed in section 4 was evaluated by comparing the results of the system with the human annotation of the corpus produced during the experiments discussed in poesio and vieira several variants of our heuristics were tried using corpus 1 as training data after deciding upon an optimal version our algorithms were evaluated using corpus 2 as test databecause our proposals concerning bridging descriptions are much less developed than those concerning 
anaphoric descriptions and discourse-new descriptions, we ran separate evaluations of two versions of the system: version 1, which does not attempt to resolve bridging descriptions, and version 2, which does. We will point out below which version of the system is considered in each evaluation. The fact that the annotators working on our corpus did not always agree, either on the classification of a definite description or on its anchor, raises the question of how to evaluate the performance of our system. We tried two different approaches: evaluating the performance of the system by measuring its precision and recall against a standardized annotation based on majority voting, and measuring the extent of the system's agreement with the rest of the annotators by means of the same metric used to measure agreement among the annotators themselves. We used the first form of evaluation to measure both the performance of the single heuristics and the performance of the system as a whole; the agreement measure was only used to measure the overall performance of the system. We discuss each of these in turn. 5.1.1 Precision and Recall. Recall and precision are measures commonly used in information retrieval to evaluate a system's performance. Recall is the percentage of correct answers reported by the system in relation to the number of cases indicated by the annotated corpus,

R = number of correct responses / number of cases,

whereas precision is the percentage of correctly reported results in relation to the total reported,

P = number of correct responses / number of responses.

These two measures may be combined to form one measure of performance, the F measure, which is computed as follows:

F = ((w^2 + 1) * P * R) / (w^2 * P + R),

where w represents the relative weight of recall to precision and typically has the value 1. A single measure gives us a balance between the two results: 100% recall may be due to a precision of 0, and vice versa; the F measure penalizes both very low recall and very low precision. Precision and recall figures for the different variants of the system were obtained by comparing the classification produced by each version with a standardized annotation extracted from the annotations produced by our human annotators by majority judgement: whenever at least two of the three coders agreed on a class, that class was chosen. Details of how the standard annotation was obtained are given in Vieira. The system's performance as a classifier was automatically evaluated against the standard annotation of the corpus as follows. Each NP in a text is given an index; when a text is annotated or processed, the coder or system associates each index of a definite description with a type of use. Both the standard annotation and the system output are represented as Prolog assertions. To assess the system's performance on the identification of a coreferential antecedent, it is necessary to compare the links that indicate the antecedent of each description classified as anaphora. These links are also represented as Prolog assertions. The system uses these assertions to build an equivalence class of discourse entities, called a coreference chain. When comparing an antecedent indicated by the system for a given definite description with that in the annotated corpus, the corresponding coreference chain is checked; that is, the system's indexes and the annotated indexes do not need to be exactly the same, as long as they belong to the same coreference chain. In this way, both would be evaluated as correct answers if the corpus is annotated with the links shown.
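To make this evaluation procedure concrete, here is a minimal sketch (in Python, rather than the Prolog of the actual system) of how precision, recall, and the F measure can be computed, and of how a proposed antecedent can be scored by checking that it belongs to the same annotated coreference chain as the description. All function and variable names are illustrative, not those of our implementation.

from collections import defaultdict

def f_measure(precision, recall, w=1.0):
    """F = ((w^2 + 1) * P * R) / (w^2 * P + R); w weights recall relative to precision."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return ((w * w + 1.0) * precision * recall) / (w * w * precision + recall)

def build_chains(gold_links):
    """Group NP indexes into coreference chains (equivalence classes) from annotated
    (anaphor, antecedent) links, and return a map from each NP index to its chain."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    for anaphor, antecedent in gold_links:
        parent[find(anaphor)] = find(antecedent)

    chains = defaultdict(set)
    for np in list(parent):
        chains[find(np)].add(np)
    return {np: members for members in chains.values() for np in members}

def evaluate(system_answers, gold_links, n_gold_cases, w=1.0):
    """system_answers: dict mapping a description's NP index to the proposed antecedent index."""
    np_to_chain = build_chains(gold_links)
    correct = 0
    for description, antecedent in system_answers.items():
        # The indexes need not be identical, as long as both NPs belong
        # to the same annotated coreference chain.
        if description in np_to_chain and antecedent in np_to_chain[description]:
            correct += 1
    precision = correct / len(system_answers) if system_answers else 0.0
    recall = correct / n_gold_cases if n_gold_cases else 0.0
    return precision, recall, f_measure(precision, recall, w)

# If the corpus links NP 154 to NP 135 and NP 140 to NP 135, then proposing
# either 135 or 140 as the antecedent of 154 counts as correct.
print(evaluate({154: 140}, [(154, 135), (140, 135)], n_gold_cases=1))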
In the end, we still need to check the results manually, because our annotated coreference chains are not complete: our annotators did not annotate all types of anaphoric expressions, so it may happen that the system indicates as antecedent an element outside an annotated coreference chain, such as a bare noun or possessive. Suppose, for example, that all references to the house are coreferential: if NP 135 is indicated as the antecedent for NP 154 in the corpus annotation, and the system indicates 140 as the antecedent for 154, an error is reported by the automatic evaluation, even though all of these NPs refer to the same entity. A second consequence of the fact that the coreference chains in our standard annotation are not complete is that in the evaluation of direct anaphora resolution we only verify whether the antecedents indicated are correct; we do not evaluate how complete the coreferential chains produced by the system are. By contrast, in the evaluation of the MUC coreference task, where all types of referring expressions are considered, the resulting coreference chains are evaluated, rather than just the indicated antecedent. Even our limited notion of coreference chain was nevertheless very helpful in the automatic evaluation, considerably reducing the number of cases to be checked manually. Agreement between our annotators in Poesio and Vieira was often only partial; in addition to precision and recall measures, we evaluated the system's performance by measuring its agreement with the annotators, using the K statistic we used in Poesio and Vieira to measure agreement among annotators. Because the proper interpretation of K figures is still open to debate, we interpret the K figures resulting from our tests comparatively, rather than absolutely. We now come to the results of the evaluation of alternative versions of the heuristics dealing with the resolution of direct anaphora discussed in Section 4.1. The optimal version of our system is based on the best results we could get for resolving direct anaphora, because we wanted to establish the coreferential relations among discourse NPs as precisely as possible. With loose segmentation it is possible for the system to identify more than one coreference link for a definite description: all antecedents satisfying the requirements within the current window will be indicated as a possible antecedent. Therefore, when evaluating the system's results, we may find that all antecedents indicated for the resolution of a description were right, or some were right and some wrong, or that all were wrong. The recall and precision figures reported here relate to those cases where all resolutions indicated were right according to the annotated corpus. In Section 4.1 we also discussed a second segmentation heuristic, which we called recency: the system does not collect all candidate NPs as potential antecedents, but only keeps the last occurrence of an NP from all those having the same head noun, and there are no restrictions regarding the antecedent distance. The results of these two methods for different window sizes are shown in Table 5. The results in this table were obtained by considering as potential antecedents indefinites, possessives, and definite descriptions; as in Vieira and Poesio, we also used the premodification heuristics proposed there. Alternatives to these heuristics were also evaluated; the results are discussed later in this section. The resulting F measures were almost the same for all heuristics, but there was clearly an increase in recall, with a loss of precision, when enlarging the window size.
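Before turning to the comparison of the individual heuristics, the following is a minimal sketch, in Python, of how same-head antecedent selection under loose segmentation and recency could be implemented. The window-based restriction and the single-antecedent-per-head recency filter follow the description above; the exact life-span extensions used by the actual (Prolog) system are not reproduced, and the data structures and names are illustrative.

def candidate_antecedents(description, discourse_model, window=4, recency=True):
    """Collect potential same-head antecedents for a definite description.

    discourse_model: NPs seen so far, each a dict with keys 'index' (position
    in the text), 'head' (head noun), and 'sentence' (sentence number).
    window: loose segmentation; only antecedents introduced at most `window`
    sentences before the description are considered.
    recency: if True, keep only the most recent occurrence of the head noun.
    """
    head, sent = description['head'], description['sentence']
    candidates = [
        np for np in discourse_model
        if np['index'] < description['index']          # introduced earlier in the text
        and np['head'] == head                         # same-head matching (direct anaphora)
        and sent - np['sentence'] <= window            # within the segmentation window
    ]
    if recency and candidates:
        # The recency heuristic keeps a single entry per head: its last occurrence.
        candidates = [max(candidates, key=lambda np: np['index'])]
    return candidates

# Hypothetical discourse model with two mentions of "house", in sentences 2 and 49:
model = [
    {'index': 10, 'head': 'house', 'sentence': 2},
    {'index': 180, 'head': 'house', 'sentence': 49},
]
description = {'index': 200, 'head': 'house', 'sentence': 50}
print(candidate_antecedents(description, model, window=4))
# only the sentence-49 mention survives the four-sentence window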
The recency heuristic had the best recall but the lowest precision, although not much lower than the others. The best precision was achieved with a one-sentence window, and recall was not dramatically affected, but this only happened because the window size constraint was relaxed. To show what happens when a strict version of the window-based segmentation approach is used, consider Table 6. As the table shows, this form of segmentation results in higher precision, but has a strong negative effect on recall; the overall F values are all worse than for the heuristics in Table 5. Finally, we tried a combination of the recency and segmentation heuristics: just one potential antecedent for each different head noun is available for resolution, the last occurrence of that head noun, and the resolution still respects the segmentation heuristic. (In our experiments, small differences in recall, precision, and F measures are frequent; we generally assume in this paper that such differences are not significant, but a more formal significance test along the lines of that in Chinchor will eventually be necessary to verify this.) The results are presented in Table 7. This table shows that by combining the recency and loose segmentation approaches to segmentation we obtain a better trade-off between recall and precision than using each heuristic separately. The version with the higher F value in Table 7 was chosen as standard and used in the tests discussed in the rest of this section. We then evaluated the different sets of potential antecedents discussed in Section 4.1, using four-sentence-window loose segmentation with recency. In an earlier version of the system, only those definite descriptions that were not resolved with a same-head antecedent were considered as potential antecedents: resolved definite descriptions would be linked to previous NPs, but would not be made available for subsequent resolution. An important difference between that implementation and the current one is that in the new version the definites resolved by the system are also made available as potential antecedents of subsequent definites. This is because in our previous prototype errors in identifying an indefinite antecedent were sometimes propagated through a coreference chain, so that the right antecedent would be missed. The results are shown in Table 8. If we only consider indefinites as potential antecedents, recall is extremely low; we also get the worst precision. In other words, considering only indefinites for the resolution of definite descriptions is too restrictive; this is because our corpus contains a large number of first-mention definite descriptions that serve as antecedents for subsequent references. The version with the highest precision is the one that only considers indefinites and definite descriptions as antecedents, but recall is lower compared to the version that considered other NPs. We chose as the basis for further testing a version that combines near-optimal values for F and precision, i.e., the version that takes indefinites, definite descriptions, and possessives. 5.2.3 Premodifiers. Finally, we tested our heuristics for dealing with premodifiers. We tested the matching algorithm from Vieira and Poesio in the present version of the system; the results are presented in Table 9. In that table we also show the results obtained with a modified matching algorithm including a third rule, which allows a premodified antecedent to match with a definite whose set of premodifiers is a superset of the set of modifiers of the antecedent. We tested each of these three heuristics alone and in combination.
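As an illustration of how such premodifier matching can be implemented, here is a small Python sketch. The superset condition corresponds to the third rule described above; the readings given for rules 1 and 2 (identical premodifier sets, and no new information in the description, respectively) are assumptions made for the sake of the example, since their exact formulation follows our earlier work and is not restated here. Names and data structures are illustrative.

def premodifiers_compatible(description_mods, antecedent_mods, rules=(1, 2)):
    """Decide whether a same-head antecedent is compatible with a definite
    description, given the two sets of premodifiers.

    Assumed readings (only rule 3 is spelled out in the text above):
      rule 1: identical premodifier sets;
      rule 2: the description's premodifiers are a subset of the antecedent's
              (no new information in the anaphoric expression);
      rule 3: the description's premodifiers are a superset of the antecedent's
              (new information is allowed in the description).
    """
    d, a = set(description_mods), set(antecedent_mods)
    if 1 in rules and d == a:
        return True
    if 2 in rules and d <= a:
        return True
    if 3 in rules and d >= a:
        return True
    return False

# "the house" resolving to "a Victorian house": no new information, rule 2 applies.
print(premodifiers_compatible(set(), {'Victorian'}, rules=(1, 2)))       # True
# "the collapsed house" resolving to "a house": new information, only rule 3 allows it.
print(premodifiers_compatible({'collapsed'}, set(), rules=(1, 2)))       # False
print(premodifiers_compatible({'collapsed'}, set(), rules=(1, 2, 3)))    # True
# Disjoint premodifier sets block the match under any of the three rules.
print(premodifiers_compatible({'abrasive'}, {'engineering', 'materials'}, rules=(1, 2, 3)))  # False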
The main result of this evaluation is that using a modified segmentation heuristic reduces the overall impact of the heuristics for premodification on the performance of the algorithm, in comparison with the system discussed in Vieira and Poesio. The best precision is still achieved by the matching algorithm that does not allow for new information in the anaphoric expression, but the best results overall are again obtained by combining rule 1 and rule 2, although either 2 or 3 works equally well when combined with 1. Heuristics 2 and 3 alone are counterintuitive, and indeed give the poorest results; however, the impact is greater on recall than precision, which suggests that the introduction of new information in noun modification is not very frequent. One of the problems with our premodifier heuristics is that, although a difference in premodification usually indicates non-coreference, as for the company's abrasive segment and the engineering materials segment, there are a few cases in our corpus in which coreferent descriptions have totally different premodification from their antecedents, as in the pixie-like clarinetist . . . the soft-spoken clarinetist. These cases would be hard even for a system using real commonsense reasoning, since often the information in the premodifier is new; we consider these examples one of the best arguments for including in the system a focus-tracking mechanism along the lines of Sidner's. Our heuristic matching algorithm also suggests wrong antecedents in cases like the rules in the example below, when the last mention refers to a modified concept: currently the rules force executives . . . the rule changes would . . . the rules will eliminate . . . Finally, the matching algorithm gets the wrong result in cases such as the population . . . the voting population, where the new information indicates a subset, superset, or part of a previously mentioned referent. On the basis of the tests just discussed, we selected the heuristics that achieve the best results for anaphoric definite descriptions. In Table 10 we present the overall results on anaphora classification and anaphora resolution for the version of the system that does not attempt to resolve bridging descriptions, for both training data and test data. The reason there are different figures for anaphora resolution and classification is that the system may correctly classify a description as anaphoric, but then find the wrong antecedent. We used this set of heuristics when evaluating the heuristics for discourse-new and bridging descriptions in the rest of the paper. The columns represent, respectively, the number of cases of descriptions classified as anaphora in the standard annotation, the total number of anaphora correctly identified, and the total number of errors. 5.2.5 Errors in Anaphora Resolution. Before discussing the results of the other heuristics used by the system, we will discuss in more detail some of the errors in the resolution of anaphoric descriptions made by using the heuristics just discussed. Some errors are simply caused by misspellings in the Treebank, as in the example below, where the antecedent is misspelled as spokewoman. The most common problems are due to the heuristics limiting the search for antecedents. In the example below, both sentence 7 and sentence 30 are outside the window considered by the system when trying to resolve the adjusters in sentence 53:

(7) She has been on the move almost incessantly since last Thursday, when an army of adjusters employed by major insurers invaded the San Francisco area.
(30) Aetna, which has nearly 3,000 adjusters, had deployed about 750 of them . . .
(53) Many of the adjusters employed by Aetna and other insurers . . .

Limiting the type of potential antecedents
to indefinites definite descriptions and possessives while improving precision also leads to problems because the antecedents introduced by other nps such as proper names are missedeg toni johnson in the following definite description is then classified by the system as larger situationunfamiliarsome of these problems are corrected in version 2 of the system which also attempts to handle bridging descriptions and therefore uses algorithms for assigning a type to such entitiesthe petite 29yearold ms johnson the premodification heuristics prevent the system from finding the right antecedent in the cases of coreferent descriptions with different premodifiers as in the victorian house that ms johnson is inspecting has been deemed unsafe by town officialsonce inside she spends nearly four hours measuring and diagramming each room in the 80yearold housein the following example it is the lack of a proper treatment of postmodification that causes the problemthe system classifies the description the earthquakerelated claims as anaphoric to claims from that storm but it is discoursenew according to the standardized annotationaetna which has nearly 3000 adjusters had deployed about 750 of them in charlotte columbia and charlestonadjusters who had been working on the east coast say the insurer will still be processing claims from that storm through decemberit could take six to nine months to handle the earthquakerelated claimsin the system correctly classifies the definite description the law as anaphoric but suggests as antecedent an income tax law whereas a majority of our annotators indicated a money lending law as the antecedentquot nearly 20 years ago mr morishita founder and chairman of aichi corp a finance company received a 10month suspended sentence from a tokyo court for violating a moneylending law and an income tax lawhe was convicted of charging interest rates much higher than what the law permitted and attempting to evade income taxes by using a double accounting systemfinally the system is incapable of resolving plural references to collections of objects introduced by singular nps even when these collections were introduced by coordinated noun phrasesalthough it would be relatively easy to add rules for handling the simplest cases many of these references can only be resolved by means of nontrivial operations the owners william and margie hammack are luckier than many othersthe hammocks the overall recall and precision results for the heuristics for identifying discoursenew descriptions presented in section 42 are shown in table 11in this table we do not distinguish between the two types of discoursenew descriptions unfamiliar and largersituation as already mentioned in section 42 distinguishing between the two types of discoursenew descriptions identified by hawkins prince and others is not easy even for humans and indeed our heuristics for recognizing discoursenew descriptions work better when evaluated togetherthe column headed if represents the number of cases of descriptions classified as discoursenew in the standard annotation indicates the total number of discoursenew descriptions correctly identified the number of errorsthese results are for the version of the system that uses the best version of the heuristics for dealing with anaphoric descriptions discussed above and that does not attempt to resolve bridging descriptions the performance of the specific heuristics discussed in section 42 is shown in tables 12 to 15table 12 shows the results of the heuristics for larger 
situation uses on the training data whereas table 13 reports the performance on the same data of 28 the law could also be interpreted as referring to quotthe law system in generalquot in which case none of the antecedents would be correct the heuristics for unfamiliar useswe report only precision figures because our standard annotation only gives us information about the classification of these discourse descriptions as discoursenew not about the reason they were classified in a certain way the most common feature of discoursenew descriptions is postmodification the least satisfactory results are those for proper names in premodificationas expected the heuristics for recognizing unfamiliar uses achieve better precision than those for larger situation uses which depend more on commonsense knowledgetables 14 and 15 summarize the results of the heuristics for discoursenew descriptions on the test data again the best results were obtained by the heuristics for recognizing unfamiliar usesthe biggest difference in performance was shown by the heuristic checking the presence of the definite in a copula construction which performed very well on the training data but poorly on the test datathe actual performance of that heuristic is difficult to evaluate however as a very low recall was reported for both training and test datain the following sections we analyze some of the problems encountered by the version of the system using these heuristicsappositioncoordinated nps with more than two conjuncts are a problem for this heuristic since in the penn treebank l coordinated nps have a structure that matches the pattern used by the system for recognizing appositionsfor example the coordinated np in the sentence c7 consists of the yous japan britain west germany canada france and italy has the structure in copulathis heuristic was difficult to evaluate because there few examples and the precision in the two data sets is very different one problem is that the descriptions in copula constructions might also be interpreted as bridging descriptionsfor instance the description the result in below is the result of something mentioned previously while the copula construction specifies its referentother ambiguous examples are and athe result is that those rich enough to own any real estate at all have boosted their holdings substantially bthe chief culprits he says are big companies and business groups that buy huge amounts of land not for their corporate use but for resale at huge profit c the key man seems to be the campaign manager mr lynchrestrictive premodificationone problem with this heuristic is that although proper nouns in premodifier positions are often used with discoursenew definites they may also be used as additional information in associative or anaphoric uses others grab books records photo albums sofas and chairs working frantically in the fear that an aftershock will jolt the house againas ms johnson stands outside the hammack house after winding up her chores there the house begins to creak and swayrestrictive postmodificationif the system fails to find an antecedent or anchor and the description is postmodified it may wrongly be classified as discoursenewin the filing on the details of the spinoff was classified as bridging on documents filed by the coders but the system classified it as discoursenew spinoff disclosed that cray research inc will withdraw the almost 100 million in financing it is providing the new firm if mr cray leaves or if the productdesign project he heads is scrappedthe 
filing on the details of the spinoff caused cray research stock to jump 2875 yesterday to close at 38 in new york stock exchange composite tradingproper nounsas we have already seen repeated below as a definite description that looks like a proper noun may in fact be anaphoricthis is not always a problem as the system does attempt to find antecedents for these definites as well but if the antecedent is not found the description is incorrectly classified as discoursenewthe petite 29yearold ms johnson special predicatesin this example the system classified as discoursenew a time reference which is classified as bridging in the standard annotation newsweek circulation for the first six months of 1989 was 3288453 flat from the same period last yearyous news circulation in the same time was 2303328 down 26as mentioned in section 2 our corpus annotation experiments showed bridging descriptions to be the most difficult class for humans to agree oneven when our annotators agreed that a particular expression was a bridging description different anchors would be available in the text for the interpretation of that bridging descriptionthis makes the results of the system for this class very difficult to evaluate furthermore the results must be evaluated by handwe first tested the heuristics individually on the training data by adding them to version 1 of the system one at a timethese separate tests were manually evaluated we then integrated all of these heuristics into a version of the system called version 2 using both automatic and manual evaluation_ in this section we discuss only the results of the individual heuristics the overall results of version 2 are discussed in section 6bridging descriptions are much more sensitive than other types of definite descriptions to the local focus for this reason version 2 uses a different search strategy for bridging descriptions than for other definite descriptionsrather than considering all definite descriptions in the current window simultaneously it goes back one sentence at a time and stops as soon as a relation with a potential anchor is found five sentencesthe results of this search over our training corpus in which 204 descriptions were classified as bridging are shown in table 16it is interesting to note that the semantic relations found in this automatic search were not always those observed in our manual analysisthe main reason the figures are so low is that the existence of a semantic relation in wordnet is not a sufficient condition to establish a link between an antecedent and a bridging descriptionin only about a third of the cases was a potential antecedent for which we could find a semantic relation in wordnet an appropriate anchoran example is although there is a semantic relation between argument and information in wordnet the description the argument is related to the vp contend rather than to the np informationsome form of focusing seems to play a crucial role in restricting the range of antecedents a sec proposal to ease reporting requirements for some company executives would undermine the usefulness of information on insider trades as a stockpicking tool individual investors and professional money managers contendthey make the argument in letters to the agency about rule changes proposed this past summer that among other things would exempt many middlemanagement executives from reporting trades in their own companies sharessense ambiguity is responsible for some of the false positivesfor instance the noun company has at least two 
distinct senses quotvisitorquot and quotbusinessquot a relation of hypernyrny was found between company and human whereas in the text the noun company was used in the quotbusinessquot sensea more important problem however is the incompleteness of the information encoded in wordnetto have an idea of how complete the information in wordnet is concerning the relations that are encoded we selected from our two corpora 70 bridging descriptions that we had manually identified as being linked to their anchors by one of the semantic relations encoded in wordnetsynonymy hypernymy and meronymy in table 17 we show the percentages of such relations actually encoded in wordnetas we can see from the table the recall figure was quite disappointing especially for synonymy relationsin some cases the problem was simply that some of the an example of problematic organization in wordnet words we looked for were not in wordnet examples include newsweekly crocidolite countersuit other times the word we looked for was contained in wordnet but not in the same typographic format as it was presented in the text for example we had spinoff in a text whereas wordnet had only an entry for spinoffa second source of problems was the use in the wsj articles of domainspecific terminology with contextdependent senses such as slump crash and bust which in articles about the economy are all synonymsfinally in other cases the relations were missing due to the structure of wordnet for instance in wordnet the nouns room wall and floor are encoded as part of building but not of house in summary our tests have shown that the knowledge encoded in wordnet is not sufficient to interpret all semantic relations between a bridging description and its antecedent found in the kind of text we are dealing with only 46 of the relations observed were encoded in wordnetthe possibility of using domainspecific automatically acquired lexical information for this purpose is being explored see for example poesio schulte i am walde and brew in addition we found that just looking for the closest semantic relative is not enough to find anchors for bridging descriptions this search has to be constrained by some type of focusing mechanism identifying named entity types is a prerequisite for resolving descriptions based on namesthe simple heuristics discussed in section 54 identified entity types for 66 of all names in the corpus precision was 95the errors we found were sometimes due to name or sense ambiguityin the same text a name may refer both to a person and a company as in cray computers corp and seymour craywhen looking in wordnet for a type for the name steve reich we found for the name reich the type countrythese problems have also been noted by the authors of systems participating in muc6 we also found undesirable relations such as hyperrtymy for person and companywe had 25 definite descriptions manually identified as based on compound nounsfor these 25 cases our implemented heuristics achieved a recall of 36 but in some cases found valid relations other than the ones we identifiedthe low recall was sometimes due to segmentationsometimes the spelling of the premodification was slightly different from the one of the description as in a 15acre plot the 15 acresas mentioned above we implemented two versions of the systemversion 1 only resolves direct anaphora and identifies discoursenew descriptions version 2 also deals with bridging descriptionsboth versions of the system have at their core a decision tree in which the heuristics discussed in the 
previous sections are tried in a fixed order to classify a certain definite description and find its anchordetermining the optimal order of application of the heuristics in the decision tree is crucial to the performance of the systemin both versions of the system we used a decision tree developed by hand on the basis of extensive evaluation we also attempted to determine the order of application automatically by means of decision tree learning algorithms in this section we first present the handcrafted decision tree and the results obtained using this decision tree for version 1 and version 2 we then present the results concerning the agreement between system and annotators and we briefly discuss the results obtained using the decision tree acquired automaticallythe handcrafted order of the heuristics in both versions is the followingfor each np of the input 2the nps that may serve as potential antecedents are made available for description resolution by means of the optimal selection criterion discussed in section 4130 by comparison the systems participating in muc6 had a recall for the named entity task ranging from 82 to 96 and precision from 89 to 97 but used comprehensive lists of cue words or consulted dictionaries of namesthe system from sheffield for instance used a list of 2600 names of organizations 94 company designators 160 titles about 500 human names from the oxford advanced learner dictionary 2268 place names and other trigger words for locations government institutions and organizations in muc7 the best combined precision score 9339 was achieved by the system from ltg in edinburgh which does not use such knowledge sourceswe used this system in a version of our prototype that only attempts to resolve bridging descriptions 3if the np is a definite description the system applies to it the following teststhe first test passed by the definite determines its classification and after that the next np is processed compound nouns wordnet lookup if one of the three tests above succeeds the description is classified as bridging and the association between description and anchor indexes is assertedthe decision tree encoded by this algorithm is shown in figure 3note that before trying to find an antecedent the system executes a few tests for identifying discoursenew descriptions in other words the strategy adopted is addition definite descriptions that matched these patterns produced errors in anaphora resolution which were eliminated by processing them firsthanddesigned decision tree for version 1 and version 2 only then try to interpret the definite description as a bridge the heuristics for recognizing bridging descriptions are only applied when the other heuristics failthis is because the performance of these heuristics is very poor and also because some of the heuristics that deal with bridging descriptions are computationally expensive the idea was to eliminate those cases less likely to be bridging before applying these heuristicsthe system does not classify all occurrences of definite descriptions when none of the tests succeeds the definite description is not classifiedwe observed in our first tests that definite descriptions not resolved as direct anaphora and not identified as discoursenew by our heuristics were mostly classified in the standardized annotation as bridging descriptions or discoursenewexamples of discoursenew descriptions not identified by our heuristics are larger situation uses such as the world the nation the government the economy the marketplace the 
spring the other hand the spot and the 1920s or discoursenew nps with restrictive premodification such as the low 40 range the defense capital good sector the residential construction industry the developing world and the worldwide supercomputer marketwe present below the overall results of the version of our system dealing with direct anaphora and discoursenew descriptions only training datathe output of the optimal configuration of version 11 for the training data is shown in figure 4a total of 20 texts were processed containing 68311 npsalmost half of these nps were considered as potential antecedents 1040 descriptions were processed by the systeman antecedent was identified for 270 of them for 212 out of the 270 definite descriptions classified as anaphoric samehead by the system the antecedent was a definite nfaccording to the annotation of one of our coders the 312 anaphoric descriptions were grouped in 164 coreference chains and 86 of these chains were initiated by definite descriptionsin figure 5 the results reported by the system are compared with the standard annotationthe figure also shows how descriptions which were not resolved by the system were classified in the standard annotationmost of the descriptions not classified by the system were bridging descriptionsthe overall precision and recall results of version 1 of the system are shown in table 18note that because a large number of definite descriptions are not classified the overall recall is only 59 even though the recall for both anaphoric and discoursenew descriptions is much highertest datanext the system was evaluated using the test data corpus 2 which had not been used to develop the heuristicsthe results are shown in figures 6 and 7summary of the results of version 1 on training datathe recall and precision figures for the system performance over the test data are presented in table 19this corpus consisted of 14 texts containing 2990 npsagain ahnost half of the nps were considered as potential antecedentsthe system processed 464 definite descriptions of these the system could classify 324 115 as direct anaphora 209 as discoursenewof the antecedents 88 were defirtites themselvesthe system incorrectly resolved 77 definite descriptions 19 anaphoric definites and 58 discoursenewas before there were just a few more errors in anaphora resolution than in anaphora classificationthe overall recall for the test data was 53 precision was 76 one difference between the results on the two data sets is the distribution into classes of those descriptions that the system fails to classifyin the first corpus the largest number of cases not classified are bridging descriptionsby contrast the largest number of cases not classified by the system in corpus 2 are discoursenewsummary of the results for test dataas discussed in section 54 the results of the heuristics for bridging descriptions presented in section 43 were not very goodwe nevertheless included these heuristics in version 2 of the system which as discussed above applied them to those descriptions that failed to be recognized as direct anaphora or discoursenewthe heuristics were applied in the following ordertraining datathe manual evaluation of the results of version 2 on the training data is presented in table 20the table lists the number of acceptable anchors and the number of false positives found by each heuristicnote that the system sometimes finds anchors that are not those identified manually but are nevertheless acceptablewe found fewer bridging relations than the 
number we observed in the corpus analysis furthermore the number of false positives produced by such heuristics is almost twice the number of right answerstest dataversion 2 was tested over the test data using automatic evaluationie the system was only evaluated as a classifier and the anchors found were not analyzeda total of 57 bridging relations were found but only 19 of the definite descriptions classified as bridges by the system had been classified as bridging descriptions in the standard annotationcompared to version 1 of the system which does not resolve bridging descriptions version 2 has higher recall but lower precision as shown in table 21as a second form of evaluation of the performance of the system we measured its agreement with the annotators on the test data using the k statisticversion 1 of the system finds a classification for 318 out of 464 definite descriptions in corpus 2 if all the definite descriptions that the system cannot classify are treated as discoursenew the agreement between the system and the three subjects that annotated this corpus on the two classes firstmention and subsequentmention is k 07 this should be compared with an agreement of k 077 between the three annotators themselvesif instead of counting these definite descriptions as discoursenew we simply do not include them in our measure of agreement then the agreement between the system and the annotators is k 078 as opposed to k 081 between the annotatorsversion 2 finds a classification for 355 out of 464 definite descriptions however its agreement figures are worseif we count the cases that the system cannot classify as discoursenew the agreement between the system and the three annotators for the three classes is k 057 if we count them as bridges k 063 if we just discard those cases k 063 againas mentioned above the cases that the system cannot handle are mainly discoursenew descriptions 651 inducing a decision treethe decision tree discussed in section 61 was derived manually by trial and errorwe also tried to derive the order of application of the heuristics automaticallyto do this we used a modified version of the system to assign boolean feature values to each definite description in the training corpus the following features were used this list of features together with the classification assigned to each description in the standard annotation was used to train an implementation of quinlan learning algorithm id3 we excluded the verification of restrictive premodification and copula constructions since these parameters had given the poorest results before an example of the samples used to train id3 is shown in specpred dirana appos propn rpostm dduse ro no no yes no 3 no no ro ro yes 3 no no no no no 2 no no no no no 2 no no no no no 1 no yes no no no 1 the algorithm generates a decision tree on the basis of the samples giventhe resulting decision tree is presented in figure 8the main difference between this algorithm and the algorithm we arrived at by hand is that the first feature checked by the decision tree generated by 1d3 is the presence of an antecedent with a samehead nounthe presence of special predicates which we adopted as the first test in our decision tree is only the fourth test in the tree in figure 8generated decision tree the learned decision tree was compared with that of the algorithm we arrived at by trial and error as follows the first 14 texts of corpus 1 were used as training data to generate the decision treewe then tested the learned algorithm over the other 6 texts of 
that corpus two different tests were undertaken and discoursenew descriptions all cases classified as bridging idiom or doubt in the standard annotation were not given as input in the learning processthis algorithm was then only able to classify descriptions as one of those two classesthe resulting decision tree classifies descriptions with a samehead antecedent as anaphoric all the rest as discoursenewhere we present the results evaluated all together considering the system as a classifier only ie without considering the tasks of anaphora resolution and of identification of discoursenew descriptions separatelythe output produced by the learned algorithm is compared to the standard annotationsince the learned algorithm classifies all cases the number of responses is equal to the number of cases as a consequence recall is the same as precision and so is the f measurethe tests over 6 texts with 195 definite descriptions gave the following results the best results were achieved by the algorithm trained for two classes onlythis is not surprising especially considering how difficult it was for our subjects to distinguish between discoursenew and bridging descriptionsthe handcrafted decision tree achieved 62 recall and 85 precision on those same texts ie a higher precision but a lower f measure due to a lower recall sinceunlike the learned algorithmit does not classify all instances of definite descriptionsif however we take the class discoursenew as a default for all cases of definite descriptions not resolved by the system recall precision and f value go to 77 slightly higher than the rates achieved by the decision tree produced by id3as the learned decision tree has the search for a samehead antecedent as the first test we modified our algorithm to work in the same way and tested it again with the two corporathe results with this configuration were in other words the results were about the same although a slightly better performance was obtained when the tests to identify discoursenew descriptions were tried firsta major difference between our proposal and almost all others is that we concentrate on definite descriptions most of the systems we discuss below attempt to resolve all types of anaphoric expressions often concentrating on pronounsfocusing on definite descriptions allowed us to investigate what types of lexical knowledge and commonsense inference are actually used in natural language comprehensionfrom an architectural standpoint the main difference between our work and other proposals in the literature is that we paid considerably more attention to the problem of identifying discoursenew definite descriptionsprevious work on computational methods for definite description resolution can be divided in two camps proposals that rely on commonsense reasoning and systems that can be quantitatively evaluated such as those competing on the coreference task in the sixth and seventh message understanding conference we discuss these two types of work in turnthe crucial characteristic of these proposals is that they exploit handcoded commonsense knowledge and cannot therefore be tested on just any arbitrary textsome of them are simply tested on texts that were especially built for the purpose of testing the system systems like the core language engine are more robust but they have to be applied to a domain restricted enough that all relevant knowledge can be encoded by handvieira and poesio processing definite descriptions sidner theory of definite anaphora cornprehensionin her dissertation 
sidner proposed a complete theory of definite np resolution including detailed algorithms for resolving pronouns anaphoric definite descriptions and bridging descriptionsshe also proposed methods for resolving larger situation uses the one class her methods do not handle are those definite descriptions that following hawkins we have called unfamiliar usesthe main contribution of sidner dissertation is her theory of focus and its role in resolving definite nps to this day her focustracking algorithms are arguably the most detailed account of the phenomenonthe main problem with sidner work from our perspective is that her algorithms rely heavily on the availability of a semantic network and causal reasoner furthermore some of the inference mechanisms are left relatively underspecified lexical and commonsense knowledge play three important roles in sidner system they are used to track focus to resolve bridging descriptions and larger situation uses and to evaluate interpretive hypotheses discarding those that seem implausibleonly recently have robust knowledgebased methods for some of these tasks begun to appear and their performance is still not very good as seen above in our discussion of using wordnet as a semantic network33 as for checking the plausibility of a hypothesis on the basis of causal knowledge about the world we now have a much better theoretical grasp of how such inferences could be made but we are still quite a long way from a general inference enginewe also found that some of sidner resolution rules are too restrictivefor example her cospecification rule 1 prescribes that definite description and focus must have the same head and no new information can be introduced by the definite but this rule is violated fairly frequently in our corpusthis criticism is not new in 1983 it was already recognized that an anaphoric full noun phrase may include some new and unshared information about a previously mentioned entity and carter weakened some of the restrictions proposed by sidner in his systemcarter shallow processing anaphor resolvercarter implemented a modified version of sidner algorithm and integrated it with an implemented version of wilks theory of commonsense reasoningthis work is interesting for two reasons first of all because carter unlike sidner attempted to evaluate the performance of his system and because in doing so he addressed the commonsense reasoning problem in some detailcarter system spar is based on the shallow processing hypothesis that in resolving anaphors reasoning should be avoided as much as possiblethis is of course the same approach taken in our own work which could be seen as pushing carter approach to the extremethe difference is that when it becomes necessary spar does use iwo commonsense knowledge sources a semantic network based on alshawi theory of memory for text interpretation and a causal reasoner based on wilks work in both cases the necessary information was encoded by handcarter system was tested over short stories specifically designed for the testing of the system about 40 written by carter himself and 23 written by othersthese latter contain about 80 definite descriptionsspar correctly resolved all anaphors in the stories written by carter and 66 out of 80 of the descriptions in the 23 other storiesthe core language enginethe core language engine is a domainindependent system developed at sri cambridge which translates english sentences into formal representationsthe system was used by sri for a variety of applications including spoken 
language translation and airline reservationsthe cle makes use of a core lexicon and uses an abductive commonsense reasoner to produce an interpretation and to verify the plausibility of choice of referents from an ordered list the required world knowledge has to be added by hand for each domain together with whatever lexical knowledge is neededthe construction of the formal representation goes through art intermediate stage called quasilogical form the qlf may contain unresolved terms corresponding to anaphoric nps including among others definite descriptionsthe resolution process that transforms qlfs into resolved logical form representations of sentences is described in alshawi definite descriptions are represented as quantified termsthe referential readings of definite descriptions are handled by proposing referents from the external application context as well as the cle context model attributive readings may also be proposed during qlf resolution some of these seem to correspond to our unfamiliar usesthus the cle seems to account for discoursenew descriptions although they are not explicitly mentioned and the methods used for choosing a referential or an attributive interpretation are not discussedto our knowledge no analysis of the performance of the system has been publishedthe seven systems that participated in the muc6 competition can all be quantitatively evaluated they achieved recall scores ranging from 3569 to 6278 and precision scores ranging from 4423 to 7188 on nominal coreferenceit is important to note that the evaluation in muc6 differed from ours in three important aspectsfirst of all these systems have to parse the texts which often introduces errors furthermore these systems often cannot get complete parses for the sentences they are processingsecondly the evaluation in muc6 considers the coreferential chain as a whole and not only one correct antecedentthe third difference is that these systems process a wider range of referring expressions including pronouns and bare nouns while our system only processes definite npson the other hand not all definite descriptions are marked in the ml1c6 coreference task these systems are only required to identify identity relations and only if the antecedent was introduced by a noun phrase this leaves out discoursenew descriptions and especially bridging descriptions which as we have seen are by far the most difficult caseskameyarna analyzes in detail the coreference module of the sri system that participated in muc6 this system achieved one of the top scores for the coreference task a recall of 59 and a precision of 72the sri system uses a sort hierarchy claimed to be sparse and incompletefor definite descriptions kameyama reports the results of a test on five articles containing 61 definite descriptions in total recall was 46 and for proper names 69 the precision figures for these two subclasses are not reportedsome of the errors in definite descriptions are said to be due to nonidentity referential relations however there is no mention of differences between discoursenew and bridging descriptionsother errors were said to be related to failure in recognizing synonymsaone and bennet propose an automatically trainable anaphora resolution systemthey train a decision tree using the c 45 algorithm by feeding feature vectors for pairs of anaphor and antecedentthey use 66 features including lexical syntactic semantic and positional featurestheir overall recall and precision figures are 6656 and 7218considering only definite nps whose 
referent is an organization recall is 3519 and precision 50 their training and test texts were newspaper articles about joint ventures and they claim that because each article always talked about more than one organization finding the antecedents of organizational anaphora was not straightforwardin burger and connolly a bayesian network is used to resolve anaphora by probabilistically combining linguistic evidencetheir sources of evidence are ccommand semantic agreement discourse focus discourse structure recency and centeringtheir methods are described and exemplified but not evaluateda bayesian framework is also proposed by cho and maida for the identification of definite descriptions referents
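The agreement figures quoted above (e.g. k 0.7 between system and annotators versus k 0.77 among the annotators themselves) are values of the kappa statistic. For reference, one common formulation is shown below; the paper does not spell out which variant of chance-corrected agreement it computes (Cohen's kappa versus the pooled K of Siegel and Castellan), so treat this as the generic form rather than the authors' exact calculation:

$$\kappa \;=\; \frac{P(A) - P(E)}{1 - P(E)}$$

where $P(A)$ is the observed proportion of agreeing classifications and $P(E)$ the proportion expected by chance; $\kappa = 1$ indicates perfect agreement and $\kappa = 0$ agreement no better than chance.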
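The evaluation above also describes assigning boolean feature values (special predicate, direct anaphora, apposition, proper noun, restrictive postmodification) to each definite description and feeding them, together with the class from the standard annotation, to Quinlan's ID3. The sketch below reproduces that style of experiment with scikit-learn's DecisionTreeClassifier using entropy splitting as a stand-in for ID3; the feature names follow the sample shown above, but the toy vectors and class labels are illustrative assumptions, not rows from the annotated corpus.

```python
# Sketch of inducing a classification order for definite descriptions from
# boolean features, in the spirit of the ID3 experiment described above.
# scikit-learn's entropy-based tree stands in for Quinlan's ID3; the toy
# feature vectors and class labels below are illustrative only.
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["spec_pred", "dir_ana", "appos", "prop_n", "r_postm"]

# One row per description; 1 means the corresponding heuristic test succeeded.
X = [
    [0, 0, 0, 1, 0],  # proper-noun premodification
    [0, 0, 0, 0, 1],  # restrictive postmodification
    [0, 1, 0, 0, 0],  # same-head antecedent found
    [1, 0, 0, 0, 0],  # special predicate
    [0, 0, 0, 0, 0],  # no test succeeded
]
# Classes taken from the standard annotation (illustrative assignment).
y = ["discoursenew", "discoursenew", "anaphoric", "discoursenew", "bridging"]

tree = DecisionTreeClassifier(criterion="entropy").fit(X, y)
print(export_text(tree, feature_names=FEATURES))
```

With data of this shape the learned tree makes the order of the tests explicit, which is the point of comparison with the hand-crafted decision tree discussed above.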
J00-4003
an empirically based system for processing definite descriptions we present an implemented system for processing definite descriptions in arbitrary domains the design of the system is based on the results of a corpus analysis previously reported which highlighted the prevalence of discoursenew descriptions in newspaper corpora the annotated corpus was used to extensively evaluate the proposed techniques for matching definite descriptions with their antecedents discourse segmentation recognizing discoursenew descriptions and suggesting anchors for bridging descriptions a major obstacle in the resolution of definite noun phrases with full lexical heads is that only a small proportion of them is actually anaphoric in our system wordnet is consulted to obtain the synonymy hypernymy and meronymy relations for resolving the definite anaphora we classify each definite description as either direct anaphora discoursenew or bridging description we distinguish restrictive from nonrestrictive postmodification by omitting modifiers that occur between commas which should not be classified as chain starting for the discoursenew classification task the model most important feature is whether the head word of the np to be classified has occurred previously
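The summary above notes that WordNet is consulted for synonymy, hypernymy, and meronymy relations between a bridging description and a candidate anchor. Below is a minimal sketch of such a lookup using NLTK's WordNet interface; this interface is our assumption (the original system queried WordNet 1.6 directly), head nouns are passed in as plain strings, and only noun senses are considered.

```python
# Sketch of checking WordNet for a synonymy / hypernymy / meronymy relation
# between the head noun of a definite description and a candidate anchor.
from nltk.corpus import wordnet as wn

def wordnet_relation(description_head, anchor_head):
    for d in wn.synsets(description_head, pos=wn.NOUN):
        for a in wn.synsets(anchor_head, pos=wn.NOUN):
            if d == a:
                # the two nouns share a sense
                return "synonymy"
            # hypernymy in either direction
            if a in d.closure(lambda s: s.hypernyms()) or \
               d in a.closure(lambda s: s.hypernyms()):
                return "hypernymy"
            # meronymy: description denotes a part/member/substance of the anchor
            parts = (a.part_meronyms() + a.member_meronyms()
                     + a.substance_meronyms())
            if d in parts:
                return "meronymy"
    return None

# wordnet_relation("wall", "building") may return "meronymy" while
# wordnet_relation("wall", "house") may not -- the coverage gap noted
# in the evaluation above.
```

As the evaluation above shows, even when a relation of the right type is encoded, the search for the closest semantic relative still has to be constrained by a focusing mechanism to yield usable anchors.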
on coreferring coreference in muc and related annotation schemes paper it is argued that quotcoreferencequot annotations as performed in the muc community for example go well beyond annotation of the relation of coreference proper as a result it is not always clear what semantic relation these annotations are encoding the paper discusses a number of problems with these annotations and concludes that rethinking of the coreference task is needed before the task is expanded in particular it suggests a division of labor whereby annotation of the coreference relation proper is separated from other tasks such as annotation of bound anaphora and of the relation between a subject and a predicative np in this paper it is argued that quotcoreferencequot annotations as performed in the muc community for example go well beyond annotation of the relation of coreference properas a result it is not always clear what semantic relation these annotations are encodingthe paper discusses a number of problems with these annotations and concludes that rethinking of the coreference task is needed before the task is expandedin particular it suggests a division of labor whereby annotation of the coreference relation proper is separated from other tasks such as annotation of bound anaphora and of the relation between a subject and a predicative npvarious practical tasks requiring language technology including for example information extraction and text summarization can be performed more reliably if it is possible to automatically find parts of the text containing information about a given topicfor example if a text summarizer has to select the most important information in a given text about the 1984 wall street crash then the summarization task is greatly helped if a program can automatically spot all the clauses in the text that contain information about this crashto evaluate a program of this kind extensive language corpora have been prepared in which human readers have annotated what has been called the coreference relationthese annotated corpora are then used as a gold standard against which the program achievements can be comparedthe relation of coreference has been defined as holding between two noun phrases if they quotrefer to the same entityquot more precisely let us assume that ai and a2 are occurrences of noun phrases and let us assume that both have a unique referent in the context in which they occur under these assumptions we can use a functional notation egreferent as short for quotthe entity referred to by aquot and define al and a2 corefer if and only if referent referentputting it simply to determine whether ai and a2 corefer first determine referent and referent then see if they are equalideally of course one would like to annotate many other semantic relations that hold between parts of a text because they are also relevant for text interpretationone candidate is the relation of anaphoraloosely speakingand glossing over some difficulties regarding the precise delimitation of anaphora an np al is said to take an np a2 as its anaphoric antecedent if and only if al depends on az for its interpretation it follows that anaphora and coreference are different thingscoreference for example is an equivalence relation anaphora by contrast is irreflexive nonsymmetrical and nontransitivesecondly anaphora as it has just been defined implies contextsensitivity of interpretation and this is not true for coreferencefor example a name and a description can corefer without either of the two depending on the 
other for its interpretationanaphoric and coreferential relations can coincide of course but not all coreferential relations are anaphoric nor are all anaphoric relations coreferentialcoreference annotation has been a focus of the sixth and seventh message understanding conferences and various other annotation exercises in this squib we intend to point at some fundamental problems with many of these annotation exercises which are caused by a failure to distinguish properly between coreference anaphora and other related phenomenabecause the muc project is the bestknown example of coreference annotation on which much subsequent work is based and because of the public availability of the muc task definition or before it extends the relation of coreference to cover wholepart and classinstance relations in this section some unclarities and inconsistencies will be discussed that we found in the literature on coreference annotation and which appear to stem from confusion about what reference and coreference arein section 21 we will explore the tendency to apply coreference annotation to nonreferring nps and bound anaphora and we will argue that this tendency is problematicin section 22 we will argue that existing annotation enterprises still fail to respond properly to the wellknown problem of how to annotate nps that are used intensionallyin section 23 we turn to a suggestion for the improvement of the actual process of annotation that has been made in the van deernter and kibble on coreferring literature namely to separate the task of determining the quotmarkablesquot from that of establishing coreference relations between them showing that this separation is hard to maintainat the end of each subsection some suggestions will be made on how the problems may be tackledthese suggestions will be elaborated in section 3the notion of reference is common to a broad variety of semantic theories when speakerswriters use an np to refer to an object or a set of objects they try to single out the entity uniquelythus when someone utters the ne the tenant of the house the speaker may aim to single out a unique person say mr xeven when this is the case the notion of referring has its problemsfor example the speaker may be mistaken in her belief that mr x is the tenant of the house in such cases it is unclear who is being referred tosuch problems notwithstanding work on coreference annotation has usually taken the notion of reference for granted on the assumption that clear cases where the referent of an np is clearly defined outnumber the problematic ones at least in some important types of discourselet us for now buy into the assumption that reference is a straightforward notionthen following bach for example one thing that is clear about reference is that some nps do not referwhen someone says the solution nps do not refer to any single solution nor to any definite set of solutionsmost theorists would agree that they do not have a referentnonreferring nps can enter anaphoric relations anaphoric antecedent to it in but if they do not refer the coreference relation as defined in section 1 and referent are defined is not applicable to themeven so the muc td asks annotators to treat them as if it was applicableit acknowledges that quotone may argue that a bound anaphor and its antecedent are not coreferential in the usual sensequot but falls short of explaining explicitly what types of anaphora are to be annotated and how 2 the annotation of bound anaphora merits some further elaborationconsider for example 
quantifying nps such as every tv network if every tv network refers at all then presumably it refers to the set of all tv networks the td however asks annotators to let every tv network corefer with its in according to the definition of coreference this means that referent referent so that referent is the set of all tv networks predicting incorrectly that means or npi is an anaphoric antecedent of np2 or np2 is an anaphoric antecedent of np1 note that r is not an equivalence relationthe subject of for example can corefer with a plural pronoun in the next sentence eg they are now required to do this but they and it do not stand in the relation r predicative nps are another category of nps whose referentiality is problematic and yet the muc td instructs annotators to let them corefer with other npsin and for example the predicative np thea president of dd cannot be replaced by the proper name higgins without changing the meaning of the sentence beyond recognition indicating that the relation between the two nps must be something other than coreference we will have more to say about predicative nps in the next sectionto sum up muc annotators have been instructed to let nps of all major classes quotcoreferquot liberally with other nps even when it is far from clear that the nps in question have been used referentiallyas a result the relation actually annotated in muchenceforth called the ident relation following hirschman and chinchor must be distinguished from the coreference relationthe td admits that certain instructions may be incompatible with the definition of coreference but no reason is given for these incompatibilities and no intuitive motivation for the relation ident is offeredas a result the annotator is left with a long series of instructions that fail to be held together by a common rationaleremedygo back to basics start from a definition of coreference and write a td that implements the definitionwe suggest that it is not until this has been done successfully that extensions into the area of bound anaphora become a risk worth takingproblems posed to coreference annotation by intensionality have motivated considerable complications in the tdconsider section 64 which discusses the implications of quotchange over timequot the td says that quottwo rnarkables should be recorded as coreferential if the text asserts them to be coreferential at any timequot thus for example the td points out that in a case like annotators are expected to mark henry higgins sales director of sudsy soaps and president of dreamy detergents as coreferentialbut since coreference is generally agreed to be a equivalence relation this implies that the sales director of sudsy soaps and the president of dreamy detergents are the same personclearly this cannot be rightluckily there are other parts of the same td that do a better job of applying the notion of coreference to sentences involving change over timeconsider for example section 13 where the sentence the stock price fell from 402 to 385 is discussedhere annotators are asked to consider the stock price as standing in the ident relation with 385 but not with 402 because 385 is quotthe more recent valuequot this solution however is still problematicwhat for instance if the price continues to falldoes the annotator have to go back to deciding that 382 is an even more recent value and the stock price does not stand in the ident relation with 385 after allremedyat least three different strategies are conceivableperhaps most obviously one might decide that 
coreference between a functional description like those in or and an np denoting a value requires this value to be the present value of the functionbut the text does not always say what the present value ismoreover functional descriptions do not always pertain to the presentin last year the president resigned for example the subject refers to last year president and consequently it does not corefer with nps referring to the present presidenta second strategy consistent with dowty wall and peters might be to say that the stock price refers only to a montaguetype individual concept that is a function from times to numbersit would follow that the stock price does not corefer with either 402 or 385 and no problem would ariseanalogously president of dreamy detergents in above where it is used predicatively might denote an individual concept rather than an individualif the next sentence goes on to say he died within a week then he is coreferential with henry higgins if instead the text proceeds this is an influential position but the pay is lousy then this is coreferential with president of dreamy detergentsif both these analyses prove to be too complex to be used in largescale annotation exercises one might have to take the point of view that such descriptions simply do not referthis would amount to a third strategy which excludes these descriptions from entering coreference relations altogether and leaving their analysis to the other tasksit has been proposed that annotation can profitably be broken down into two more manageable steps annotation of markables is to be carried out before partitioning the set of markables into equivalence classes of coreferring elements it turns out however that a strict distinction between the two steps is difficult to maintain because in principle almost anything is markablein the muc7 td this is sensibly acknowledged by letting annotators mark up certain elements only if they corefer with an existing markable these include conjuncts and prenominal modifiersin the following example the first occurrence of aluminum is only considered to be markable because it corefers with the occurrence of this noun as a bare np in the second clausein other words coreference helps to determine what the markables are finding all the nps that might participate in coreference becomes even harder if the annotation scheme is extended to cover event coreference since it is often extremely difficult to determine which events can serve as antecedents examples of this kind suggest that one markable can give rise to another a complication of a similarly algebraic flavor arises if quotdiscontinuous elements including conjoined elementsquot are covered as when a plural pronoun corefers with a combination of previously occurring nps note especially that annotators would have to be on guard for the possibility of different combinations of markables coreferring to each othera corpus for example can easily contain nps a b c and d for which referent you referent referent you referenteven assuming that each of abc and d has been properly identified as a markable during step 1 this is little guarantee that annotators of step 2 will realize the complex coreference relation between the combination of a and b and that of c and d the number of possible combinations of markables will often be too large to handleremedyone alternative is to have a first pass where only referring expressions that look like anaphors are marked up such as pronouns and definite npssubsequent passes would look for 
antecedents for these expressions and link coreferring elementsan intermediate approach would be to mark up a core set of referring expressions on the first pass allowing for further referring expressions to be identified on subsequent passes if this is necessary to resolve coreferencethe extent to which each strategy would contribute to accuracy and speed of annotation remains to be determinedcurrent quotcoreferencequot annotation practice as exemplified by muc has overextended itself mixing elements of genuine coreference with elements of anaphora and predication in unclear and sometimes contradictory waysas a result the annotated corpus emerging from muc is unlikely to be as useful for the computational linguistics research community as one might hope the more so because generalization to other domains is bound to make problems worsein many domains for example other sources of intensionality than change over time occur prominentlyan example is epistemic modality van deemter and kibble on coreferring the relation between henry higgins and the man you have talked to is analogous to that between henry higgins and sales director of sudsy soaps in with possible worlds taking the place of points in time the two nps refer to the same individual in some possible worlds only modality of course interacts with tense leading to further complicationsthe muc td has addressed many of the difficult problems in the area of reference and coreference but if its success is judged by the criteria in hirschman and chinchor the results are mixed at bestcriterion 4 has been discussed aboveconcerning criterion 3 it appears doubtful that the present task definition can be applied quotquickly and cheaplyquot hirschman et al when discussing this issue note that interannotator agreement at the time of writing was in the low eightiesthis figure which falls markedly short of the 95 required by criterion 2 does not seem to have improved substantially since concerning criterion 1 finally it has been observed that the figures for recall in the muc information extraction algorithm are rather discouraging the material in section 2 suggests that this relative lack of success is no accident and that unclarities in the td are to blamerepairs are not always easy to findgiven this situation we suggest that a rethinking of the coreference task is requiredfirstly one needs a consistent story of what reference and coreference are taken to betheoretical work on reference does not show a consensus on some crucial questions in this area different answers have been suggested each with its own advantages and disadvantagesfor example one might identify the notion of a referring np with that of a semantically definite np in the sense of barwise and cooper 3 this would include proper names extensional definite descriptions universally quantified nps and specifically used indefinites but it would exclude nonspecifically used indefinites such as at least n companies most computational linguistsa more liberal approach along the lines of kamp and reyle would predict that a quantifying np such as the subject of most computational linguists use a parser refers to the set of those computational linguists who use a parser the vp helps to determine the referent of the npthe first approach would make annotation easier to perform and the results would be likely to be more reliable as a result but it would feed less information into the information extraction tasktradeoffs of this kind are unavoidable and experimentation will be required to 
determine which option provides the best resultssecondly we suggest a further division of labor whereby those phenomena that are no longer accounted for in the new td are covered by other tasks for example the two nps henry higgins and president of sudsy soaps do not corefer and the relation between them should be irrelevant to coreference annotationif it is imperative that information about henry previous jobs be saved for posterity then some other annotation task has to be defined with its own very different td involving the notion of individuals having properties at certain times or intervals onlysomething analogous is true for the annotation of bound anaphorathe issue under discussion illustrates a more general pointit is now widely agreed that linguistic theorizing is sometimes insufficiently informed by observational dataconversely we would like to submit that corpusbased research is sometimes insufficiently informed by theoryit follows in our opinion that there is scope for more collaboration between theoretical and corpusbased linguists in this areathis squib attempts to be a small step in this directionthe authors wish to thank christy doran renate henschel adam kilgarriff paul piwek massimo poesio richard power and four anonymous referees for their comments on an earlier draft of this paperwe are grateful to lynette hirschman and breck baldwin for their very constructive responses to a predecessor of this paper kibble work on this paper was funded by the uk epsrc as part of the gnome and rags projects
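Coreference as defined above is an equivalence relation, which is what licenses partitioning markables into chains (equivalence classes). The sketch below performs that partitioning with a union-find structure; the markable identifiers and links are illustrative, not drawn from any MUC text, and the last two links deliberately encode the predicative and change-over-time cases discussed above, to show how transitive closure merges classes that should arguably stay apart.

```python
# Sketch: treating annotated "coreference" links as an equivalence relation
# and partitioning markables into chains with union-find.  Markable ids and
# links are illustrative only.
from collections import defaultdict

class UnionFind:
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

links = [
    ("henry_higgins", "he"),
    ("henry_higgins", "president_of_dreamy_detergents"),    # predication
    ("sales_director_of_sudsy_soaps", "henry_higgins"),      # change over time
]

uf = UnionFind()
for a, b in links:
    uf.union(a, b)

chains = defaultdict(set)
for m in list(uf.parent):
    chains[uf.find(m)].add(m)
print(list(chains.values()))
# Because the relation is transitively closed, the two job titles end up in
# the same chain -- the unwanted consequence discussed above.
```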
J00-4005
on coreferring coreference in muc and related annotation schemes in this paper it is argued that coreference annotations as performed in the muc community for example go well beyond annotation of the relation of coreference proper as a result it is not always clear what semantic relation these annotations are encoding the paper discusses a number of problems with these annotations and concludes that rethinking of the coreference task is needed before the task is expanded in particular it suggests a division of labor whereby annotation of the coreference relation proper is separated from other tasks such as annotation of bound anaphora and of the relation between a subject and a predicative np it suffers however from a number of problems chief among which is the fact that the one semantic relation expressed by the scheme ident conflates a number of relations that semanticists view as distinct besides coreference proper there are identity anaphora bound anaphora and even predication
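One way to make the individual-concept analysis mentioned in the paper above (after Dowty, Wall, and Peters) concrete is to treat a functional description as denoting a function from times to values rather than a value; the notation below is ours, not the paper's, and the figures are taken as they appear in the text:

$$\textit{the stock price} \;\leadsto\; f : T \to \mathbb{R}, \qquad f(t_1) = 402, \qquad f(t_2) = 385$$

On this reading the description refers to $f$ itself, so it corefers with neither 402 nor 385, and the question of which value is the more recent one never arises for the annotator.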
unsupervised learning of the morphology of a natural language this study reports the results of using minimum description length analysis to model unsupervised learning of the morphological segmentation of european languages using corpora ranging in size from 5000 words to 500000 words we develop a set of heuristics that rapidly develop a probabilistic morphological grammar and use mdl as our primary tool to determine whether the modifications proposed by the heuristics will be adopted or not the resulting grammar matches well the analysis that would be developed by a human morphologist in the final section we discuss the relationship of this style of mdl grammatical analysis to the notion of evaluation metric in early generative grammar this study reports the results of using minimum description length analysis to model unsupervised learning of the morphological segmentation of european languages using corpora ranging in size from 5000 words to 500000 wordswe develop a set of heuristics that rapidly develop a probabilistic morphological grammar and use mdl as our primary tool to determine whether the modifications proposed by the heuristics will be adopted or notthe resulting grammar matches well the analysis that would be developed by a human morphologistin the final section we discuss the relationship of this style of mdl grammatical analysis to the notion of evaluation metric in early generative grammarthis is a report on the present results of a study on unsupervised acquisition of morphologythe central task of morphological analysis is the segmentation of words into the components that form the word by the operation of concatenationwhile that view is not free of controversy it remains the traditional conception of morphology and the one that we shall employ hereissues of interface with phonology traditionally known as morphophonology and with syntax are not directly addressedwhile some of the discussion is relevant to the unrestricted set of languages some of the assumptions made in the implementation restrict the useful application of the algorithms to languages in which the average number of affixes per word is less than what is found in such languages as finnish hungarian and swahili and we restrict our testing in the present report to more widely studied european languagesour general goal however is the treatment of unrestricted natural languages department of linguistics university of chicago 1010 e 59th street chicago il 60637email jagoldsmithuchicagoedu1 some of the work reported here was done while i was a visitor at microsoft research in the winter of 1998 and i am grateful for the support i received therea first version was written in september 1998 and a muchrevised version was completed in december 1999this work was also supported in part by a grant from the argonne national laboratoryuniversity of chicago consortium which i thank for its supporti am also grateful for helpful discussion of this material with a number of people including carl de marcken jason eisner zhiyi chi derrick higgins jorma rissanen janos simon svetlana soglasnova hisami suzuki and jessie pinkhamas noted below i owe a great deal to the remarkable work reported in de marcken dissertation without which i would not have undertaken the work described herei am grateful as well to several anonymous reviewers for their considerable improvements to the content of this paperthe program in question takes a text file as its input and produces a partial morphological analysis of most of the words of the corpus 
the goal is to produce an output that matches as closely as possible the analysis that would be given by a human morphologistit performs unsupervised learning in the sense that the program sole input is the corpus we provide the program with the tools to analyze but no dictionary and no morphological rules particular to any specific languageat present the goal of the program is restricted to providing the correct analysis of words into component pieces though with only a rudimentary categorical labelingthe underlying model that is utilized invokes the principles of the minimum description length framework which provides a helpful perspective for understanding the goals of traditional linguistic analysismdl focuses on the analysis of a corpus of data that is optimal by virtue of providing both the most compact representation of the data and the most compact means of extracting that compression from the original datait thus requires both a quantitative account whose parameters match the original corpus reasonably well and a spare elegant account of the overall structurethe novelty of the present account lies in the use of simple statements of morphological patterns which aid both in quantifying the mdl account and in constructively building a satisfactory morphological grammar in addition the system whose development is described here sets reasonably high goals the reformulation in algorithmic terms of the strategies of analysis used by traditional morphologistsdeveloping an unsupervised learner using raw text data as its sole input offers several attractive aspects both theoretical and practicalat its most theoretical unsupervised learning constitutes a linguistic theory producing a completely explicit relationship between data and analysis of that dataa tradition of considerable age in linguistic theory sees the ultimate justification of an analysis a of any single language l as residing in the possibility of demonstrating that analysis a derives from a particular linguistic theory lt and that that lt works properly across a range of languages there can be no better way to make the case that a particular analysis derives from a particular theory than to automate that process so that all the linguist has to do is to develop the theoryascomputeralgorithm the application of the theory to a particular language is carried out with no surreptitious helpfrom a practical point of view the development of a fully automated morphology generator would be of considerable interest since we still need good morphologies of many european languages and to produce a morphology of a given language quotby handquot can take weeks or monthswith the advent of considerable historical text available online it is of great interest to develop morphologies of particular stages of a language and the process of automatic morphology writing can simplify this stagewhere there are no native speakers availableconsiderablya third motivation for this project is that it can serve as an excellent preparatory phase for an unsupervised grammar acquisition systemas we will see a significant proportion of the words in a large corpus can be assigned to categories though the labels that are assigned by the morphological analysis are corpus internal nonetheless the assignment of words into distinct morphologically motivated categories can be of great service to a syntax acquisition devicethe problem then involves both the determination of the correct morphological split for individual words and the establishment of accurate categories 
of stems based on the range of suffixes that they accept inflectional suffixes on a word which contains a stem that is followed by two or more inflectional suffixes and we would like to identify derivational prefixes and suffixeswe want to be told that in this corpus the most important suffixes are s ing ed and so forth while in the next corpus the most important suffixes are e en heit ig and so onof course the program is not a language identification program so it will not name the first as quotenglishquot and the second as quotgermanquot but it will perform the task of deciding for each word what is stem and what is affix2range of suffixes the most salient characteristic of a stem in the languages that we will consider here is the range of suffixes with which it can appearadjectives in english for example will appear with some subset of the suffixes er est ity ness etcwe would like to determine automatically what the range of the most regular suffix groups is for the language in question and rank suffix groupings by order of frequency in the corpusto give a sense of the results of the program consider one aspect of its analysis of the novel the adventures of tom sawyerand this result is consistent by and large regardless of the corpus one choosesconsider the topranked signatures illustrated in table 1 a signature is an alphabetized list of affixes that appear with a particular stem in a corpusthe present morphology learning algorithm is contained in a c program called linguistica that runs on a desktop pc and takes a text file as its inputanalyzing a corpus of 500000 words in english requires about five minutes on a pentium ii 333perfectly respectable results can be obtained from corpora as small as 5000 wordsthe system has been tested on corpora in english french german spanish italian dutch latin and russian some quantitative results are reported belowthe corpora that serve as its input are largely materials that have been obtained over the internet and i have endeavored to make no editorial changes to the files that are the inputin this paper i will discuss prior work in this area the nature of the mdl model we propose heuristics for the task of the initial splitting of words into stem and affix the resulting signatures use of mdl to search the space of morphologies results the identification of entirely spurious generalizations the grouping of signatures into larger units and directions for further improvements finally i will offer some speculative observations about the larger perspective that this work suggests and work in progress the task of automatic word analysis has intrigued workers in a range of disciplines and the practical and theoretical goals that have driven them have varied considerablysome like zellig harris view the task as an essential one in defining the nature of the linguistic analysisbut workers in the area of data compression dictionary construction and information retrieval have all contributed to the literature on automatic morphological analysisthe only general review of work in this area that i am aware of is found in langer which is ten years old and unpublishedwork in automatic morphological analysis can be usefully divided into four major approachesthe first approach proposes to identify morpheme boundaries first and thus indirectly to identify morphemes on the basis of the degree of predictability of the n 1st letter given the first n letters this was first proposed by zellig harris and further developed by others notably by hafer and weiss the second 
approach seeks to identify bigrams that have a high likelihood of being morpheme internal a view pursued in work discussed below by klenk langer and othersthe third approach focuses on the discovery of patterns of phonological relationships between pairs of related wordsthe fourth approach which includes that used in this paper is topdown and seeks an analysis that is globally most concisein this section we shall review some of the work that has pursued these approachesbriefly necessarilywhile not all of the approaches discussed here use no prior languageparticular knowledge i exclude from discussions those systems that are based essentially on a prior humandesigned analysis of the grammatical morphemes of a language aiming at identifying the stem and the correct parsing such is the case for example in pacak and pratt koch küstner and riidiger and wothke and schmidt with the exception of harris algorithm the complexity of the algorithms is such as to make implementation for purposes of comparison prohibitively timeconsumingat the heart of the first approach due to harris is the desire to place boundaries between letters in a word based on conditional entropy in the following sensewe construct a device that generates a finite list of words our corpus letter by letter and with uniform probability in such a way that at any point in its generation we can inquire of it what the entropy is of the set consisting of the next letter of all the continuations it might makelet us refer to this as the prefix conditional entropy clearly we may be equally interested in constructing a trie from the right edge of words which then provides us with a suffix conditional entropy in mirrorimage fashionharris himself employed no probabilistic notions and the inclusion of entropy in the formulation had to await hafer and weiss but allowing ourselves the anachronism we may say that harris proposed that local peaks of prefix conditional entropy should identify morpheme breaksthe method proposed in harris appealed to what today we would call an oracle for information about the language under scrutiny but in his 1967 article harris implemented a similar procedure on a computer and a fixed corpus restricting his problem to that of finding morpheme boundaries within wordsharris method is quite good as a heuristic for finding a good set of candidate morphemes comparable in quality to the mutual informationbased heuristic that i have used and which i describe belowit has the same problem that good heuristics frequently have it has many inaccuracies and it does not lend itself to a next step a qualitatively more reliable approximation of the correct solutionhafer and weiss explore in detail various ways of clarifying and improving on harris algorithm while remaining faithful to the original intenta brief summary does not do justice to their fascinating discussion but for our purposes their results confirm the character of the harrisian test as heuristic with harris proposal a quantitative measure is proposed and best results for morphological analysis are obtained in some cases by seeking a local maximum of prefix conditional entropy in others by seeking a value above a threshold and in yet others good results are obtained only when this measure is paired with a similar measure constructed in mirrorimage fashion from the end of the wordand then some arbitrary thresholds are selected which yield the best resultswhile no single method emerges as the best one of the best yields precision of 091 and recall of 061 on a corpus 
of approximately 6200 word typesthe second approach that can be found in the literature is based on the hypothesis that local information in the string of letters is sufficient to identify morpheme boundariesthis hypothesis would be clearly correct if all morpheme boundaries were between pairs of letters 1112 that never occur in that sequence morpheme internally and the hypothesis would be invalidated if conditional probabilities of a letter given the previous letter were independent of the presence of an intervening boundarythe question is where real languages distribute themselves along the continuum that stretches between these two extremesa series of publications has explored this question including janssen klenk and flenner any brief description that overlooks the differences among these publications is certain to do less than full justice to all of themthe procedure described in janssen and flenner begins with a training corpus with morpheme boundaries inserted by a human and hence the algorithm is not in the domain of unsupervised learningeach bigram is associated with a triple indicating the frequency in the training corpus of a morpheme boundary occurring to the left of between or to the right of that bigramin a test word each space between letters is assigned a score that is the sum of the relevant values derived from the training session in the word string for example the score for the potential cut between str and ing is the sum of three values the probability of a morpheme boundary after tr the probability of a morpheme boundary between r and i and the probability of a morpheme boundary before in that these numbers should give some indication of the presence of a morpheme boundary is certain for they are the sums of numbers that were explicitly assigned on the basis of overtly marked morpheme boundariesbut it remains unclear how one should proceed further with the sumas hafer and weiss discover with harris measure it is unclear whether local peaks of this measure should predict morpheme boundaries or whether a threshold should be set above which a morpheme boundary is predictedflenner and proponents of this approach have felt some freedom on making this choice in an ad hoc fashionjanssen observes that the french word linguistique displays three peaks predicting the analysis lthguistique employing a trigram modelthe reason for the strong but spurious peak after lin is that lin occurs with high frequency word finally just as gui appears with high frequency word initiallyone could respond to this observation in several ways wordfinal frequency should not contribute to wordinternal morphemefinal status or perhaps frequencies of this sort should not be addedindeed it is not clear at all why these numbers should be added they do not for example represent probabilities that can be addedjanssen notes that the other two trigrams that enter into the picture had a zero frequency of morpheme break in the desired spot and proposes that the presence of any zeros in the sum forces the sum to be 0 raising again the question of what kind of quantity is being modeled there is no scholarly tradition according to which the presence of zero in a sum should lead to a total of 0i do not have room to discuss the range of greedy affixparsing algorithms these authors explore but that aspect of their work has less bearing on the comparison with the present paper whose focus is on datadriven learningthe major question to carry away from this approach is this can the information that is expressed in the 
division of a set of words into morphemes be compressed into local information the answer i believe is in general negativemorphology operates at a higher level so to speak and has only weak statistical links to local sequencing of phonemes or lettersthe third approach focuses on the discovery of patterns explicating the overt shapes of related forms in a paradigmdzeroski and erjavec report on work that they have done on slovene a south slavic language with a complex morphology in the context of a similar projecttheir goal essentially was to see if an inductive logic program could infer the principles of slovene morphology to the point where it could correctly predict the nominative singular form of a word if it were given an oblique formtheir project apparently shares with the present one the requirement that the automatic learning algorithm be responsible for the decision as to which letters constitute the stem and which are part of the suffix though the details offered by dzeroski and etjavec are sketchy as to how this is accomplishedin any event they present their learning algorithm with a labeled pair of wordsa base form and an inflected formit is not clear from their description whether the base form that they supply is a surface form from a particular point in the inflectional paradigm or a more articulated underlying representation in a generative linguistic sense the former appears to be their policydzeroski and erjavec goal is the development of rules couched in traditional linguistic terms the categories of analysis are decided upon ahead of time by the programmer and each individual word is identified with regard to what morphosyntactic features it bearsthe form bolecina is marked for example as a feminine noun singular genitivein sum their project thus gives the system a good deal more information than the present project doestwo recent papers jacquemin and gaussier deserve consideration heregaussier approaches a very similar task to that which we consider and takes some similar stepshis goal is to acquire derivational rules from an inflectional lexicon thus insuring that his algorithm has access to the lexical category of the words it deals with using the terminology of the present paper gaussier considers candidate suffixes if they appear with at least two stems of length 5his first task is to infer paradigms from signatures which is to say to find appropriate clusters of signaturesone example cited is depart departure departerhe used a hierarchical agglomerative clustering method which begins with all signatures forming distinct clusters and successively collapses the two most similar clusters where similarity between stems is defined as the number of suffixes that two stems share and similarity between clusters is defined as the similarity between the two least similar stems in the respective clustershe reports a success rate of 77 but it is not clear how to evaluate this figurethe task that gaussier addresses is defined from the start to be that of derivational morphology and because of that his analysis does not need to address the problem of inflectional morphology but it is there that the difficult clustering problem arises which is how to ensure that the signatures nulls and the signature nulleds are not assigned to single clusters12 that is in english both nouns and verbs freely occur with the suffixes null and s and while ed and disambiguate the two cases it is very difficult to find a statistical and morphological basis for this knowledgejacquemin explores an 
additional source of evidence regarding clustering of hypothesized segmentation of words into stems and suffixes he notes that the hypothesis that there is a common stem gen in gene and genetic and a common stem express in expression and expressed is supported by the existence of small windows in corpora containing the word pair genetic expression and the word pair gene expressed as this example suggests jacquemin work is situated within the context of a desire for superior information retrievalin terms of the present study jacquemin algorithm consists of finding signatures with the longest possible stems and establishing pairs of stems that occur together in two or more windows of length 5 or lesshe tests his results on 100 random pairs discovered in this fashion placing upper bounds on the length of the suffix permitted between one and five letters and independently varying the length of the window in questionhe does not vary the minimum size of the stem a consideration that turns out to be quite important in germanic languages though less so in romance languageshe finds that precision varies from 97 when suffixes are limited to a length of one letter to 64 when suffixes may be five letters long with both figures assuming an adjacency window of two words precision falls to 15 when a window of four words is permittedjacquemin also employs the term signature in a sense not entirely dissimilar to that employed in the present paper referring to the structured set of four suffixes that appear in the two windows he notes that incorrect signatures arise in a large number of cases and suggests a quality function along the following lines stems are linked in pairs compute then the average length of the shorter stem in each pair the quality function is defined as that average divided by the length of the largest suffix in the signature reject any signature class for which that ratio is less than 10this formula and the threshold is purely empirical in the sense that there is no larger perspective that bears on determining the appropriateness of the formula or the values of the parametersthe strength of this approach clearly is its use of information that cooccurrence in a small window provides regarding semantic relatednessthis allows a more aggressive stance toward suffix identification there can be little question that the type of corpus studied lends itself particularly to this style of inference and that similar patterns would be far rarer in unrestricted text such as tom sawyer or the brown corpus13 gaussier also offers a discussion of inference of regular morphophonemics which we do not treat in the present paper and a discussion in a final section of additional analysis though without test resultsgaussier aptly calls our attention to the relevance of minimum edit distance relating two potential allomorphs and he proposes a probabilistic model based on patterns established between allomorphsin work not discussed in this paper i have explored the integration of minimum edit distance to an mdl account of allomorphy as well and will discuss this material in future workthe fourth approach to morphology analysis is topdown and seeks a globally optimal analysis of the corpusthis general approach is based on the insight that the number of letters in a list of words is greater than the number of letters in a list of the stems and affixes that are present in the original listthis is illustrated in figure 1this simple observation lends hope to the notion that we might be able to specify a relatively 
simple figure of merit independently of how we attempt to find analyses of particular datathis view appropriately elaborated is part of the minimum description length approach that we will discuss in detail in this paperkazakov presents an analysis in this fourth approach using a straightforward measurement of the success of a morphological analysis that we have mentioned counting the number of letters in the inventory of stems and suffixes that have been hypothesized the improvement in this count over the number of letters in the original word list is a measure of the fitness of the analysishe used a list of 120 french words in one experiment and 39 forms of the same verb in another experiment and employed what he terms a genetic algorithm to find the best cut in each wordhe associated each of the 120 words with an integer indicating where the morphological split was to be and measured the fitness of that grammar in terms of its decrease in number of total lettershe does not describe the fitness function used but seems to suggest that the single topperforming grammar of each generation is preserved all others are eliminated and the topperforming grammar is then subjected to mutationthat is in a casebycase fashion the split between stems and suffixes is modified to form a new grammarin one experiment described by kazakov the population was set to 800 and 2000 generations were modeledon a pentium 90 and a vocabulary of 120 items the computation took over eight hourswork by michael brent and carl de marcken has explored analyses of the fourth type as wellresearchers have been aware of the utility of the informationtheoretic notion of compression from the earliest days of information theory and there have been efforts to discover useful frequent chunks of letters in text such as radhakrishnan but to my knowledge brent and de marcken works were the first to explicitly propose the guiding of linguistic hypotheses by such notionsbrent work addresses the question of determining the correct morphological analysis of a corpus of english words given their syntactic category utilizing the notion of minimal encoding while de marcken addresses the problem of determining the quotbreakingquot of an unbroken stream of letters or phonemes into chunks that correspond as well as possible to our conception of words implementing a wellarticulated algorithm couched in a minimum description length framework and exploring its effects on several large corporabrent aims at finding the appropriate set of suffixes from a corpus rather than the more comprehensive goal of finding the correct analysis for each word both stem and suffix and i think it would not be unfair to describe it as a testofconcept trial on a corpus ranging in size from 500 to 8000 words while this is not a small number of words our studies below focus on corpora with on the order of 30000 distinct wordsbrent indicates that he places other limitations as well on the hypothesis space such as permitting no suffix which ends in a sequence that is also a suffix brent observation is very much in line with the spirit of the present analysis quotthe input lexicons contained thousands of nonmorphemic endings and mere dozens of morphemic suffixes but the output contained primarily morphemic suffixes in all cases but onethus the effects of nonmorphemic regularities are minimalquot brent corpora were quite different from those used in the experiments reported below his were based on choosing the n most common words in a wall street journal corpus while the 
present study has used large and heterogeneous sources for corpora which makes for a considerably more difficult taskin addition brent scored his algorithm solely on how well it succeeded in identifying suffixes rather than on how well it simultaneously analysed stem and suffix for each word the goal of the present studyquot brent makes clear the relevance and importance of informationtheoretic notions but does not provide a synthetic and overall measure of the length of the morphological grammar16 brent description of his algorithm is not detailed enough to satisfy the curiosity of someone like the present writer who has encountered problems that brent approach would seem certain to encounter equallyas we shall see below the central practical problem to grapple with is the fact that when considering suffixes consisting of only a single letter it is extremely difficult to get a good estimate of how many of the potential occurrences are suffixal s and how many are notas we shall suggest towards the end of this paper the only accurate way to make an estimate is on the basis of a multinornial estimate once larger suffix signatures have been establishedwithout this it is difficult not to overestimate the frequency of singleletter suffixes a result that may often in my experience deflect the learning algorithm from discovering a correct twoletter suffix goldsmith unsupervised learning of the morphology of a natural language de marcken addresses a similar but distinct task that of determining the correct breaking of a continuous stream of segments into distinct wordsthis problem has been addressed in the context of asian languages where standard orthography does not include white space between words and it has been discussed in the context of language acquisition as wellde marcken describes an unsupervised learning algorithm for the development of a lexicon using a minimum description length frameworkhe applies the algorithm to a written corpus of chinese as well as to written and spoken corpora of english and his effort inspired the work reported herede marcken algorithm begins by taking all individual characters to be the baseline lexicon and it successively adds items to the lexicon if the items will be useful in creating a better compression of the corpus in question or rather when the improvement in compression yielded by the addition of a new item to the codebook is greater than the length associated with the new item in the codebookin general a lexical item of frequency f can be associated with a compressed length of log f and de marcken algorithm computes the compressed length of the viterbibest parse of the corpus where the compressed length of the whole is the sum of the compressed lengths of the individual words plus that of the lexiconin general the addition of chunks to the lexicon will improve the compression of the corpus as a whole and de marcken shows that successive iterations add successively larger pieces to the lexiconde marcken procedure builds in a bottomup fashion looking for larger and larger chunks that are worth assigning the status of dictionary entriesthus if we look at unbroken orthographic texts in english the twoletter combination th will become the first candidate chosen for lexical status later is will achieve that status too and soon this will as wellthe entry this will not in effect point to its four letters directly but will rather point to the chunks th and is which still retain their status in the lexicon the creation of larger constituents will 
occasionally lead to the elimination of smaller chunks but only when the smaller chunk appears almost always in a single larger unitan example of an analysis provided by de marcken algorithm is given in taken from de marcken in which i have indicated the smallestlevel constituent by placing letters immediately next to one another and then higher structure with various pair brackets for orthographic convenience there is no theoretical significance to the difference between quotquot and quot0quot etcde marcken analysis succeeds quite well at identifying words but does not make any significant effort at identifying morphemes as suchapplying de marcken algorithm to a quotbrokenquot corpus of a language in which word boundaries are indicated provides interesting results but none that provide anything even approaching a linguistic analysis such as identification of stems and affixesthe broken character of the corpus serves essentially as an upper bound for the chunks that are postulated while the letters represent the lower boundde marcken mdlbased figure of merit for the analysis of a substring of the corpus is the sum of the inverse log frequencies of the components of the string in question the best analysis is that which minimizes that number plus the compressed length of each of the lexical items that have been hypothesized to form the lexicon of the corpusit would certainly be natural to try using this figure of merit on words in english along with the constraint that all words should be divided into exactly two piecesapplied straightforwardly however this gives uninteresting results words will always be divided into two pieces where one of the pieces is the first or the last letter of the word since individual letters are so much more common than morphemesin additionand this is less obviousthe hierarchical character of de marcken model of chunking leaves no place for a qualitative difference between highfrequency quotchunksquot on the one hand and true morphemes on the other str is a highfrequency chunk in english but it is not at all a morphemethe possessive marker on the other hand is of relatively low frequency in english but is clearly a morphememdl is nonetheless the key to understanding this problemin the next section i will present a brief description of the algorithm used to bootstrap the problem one which avoids the trap mentioned briefly in note 21this provides us with a set of candidate splittings and the notion of the signature of the stem becomes the working tool for determining which of these splits is linguistically significantmdl is a framework for evaluating proposed analyses but it does not provide a set of heuristics that are nonetheless essential for obtaining candidate analyses which will be the subject of the next two sectionsthe central idea of minimum description length analysis is composed of four parts first a model of a set of data assigns a probability distribution to the sample space from which the data is assumed to be drawn second the model can then be used to assign a compressed length to the data using familiar informationtheoretic notions third the model can itself be assigned a length and fourth the optimal analysis of the data is the one for which the sum of the length of the compressed data and the length of the model is the smallestthat is we seek a minimally compact specification of both the model and the data simultaneouslyaccordingly we use the conceptual vocabulary of information theory as it becomes relevant to computing the length in bits of 
various aspects of the morphology and of the data representation.

Let us suppose that we know the correct analysis of a set of words and we wish to create a model using that knowledge. In particular, we know which words have no morphological analysis, and, for all the words that do have a morphological analysis, we know the final suffix of the word. An MDL model can most easily be conceptualized if we encode all such knowledge by means of lists; see Figure 2. In the present case we have three lists: a list of stems, of suffixes, and of signatures. We construct a list of the stems of the corpus, defined as the set of the unanalyzed words plus the material that precedes the final suffix of each morphologically analyzed word. We also construct a list of suffixes that occur with at least one stem. Finally, each stem is empirically associated with a set of suffixes; we call this set the stem's signature, and we construct a third list consisting of the signatures that appear in this corpus. This third list, however, contains no letters, but rather pointers to stems and suffixes. We do this in one sense because our goal is to construct the smallest morphology, and in general a pointer requires less information than an explicit set of letters; but in a deeper sense it is the signatures whose compactness provides the explicit measurement of the conciseness of the entire analysis. Note that, by construction, each stem is associated with exactly one signature.

Since stem, suffix, and signature all begin with s, we opt for using t to represent a stem, f to represent a suffix, and σ to represent a signature, while the uppercase T, F, Σ represent the sets of stems, suffixes, and signatures, respectively. The number of members of such a set will be represented |T|, |F|, etc., while the number of occurrences (the token count) of a stem, suffix, etc. will be represented as [t], [f], etc. The set of all words in the corpus will be represented as W; hence the length of the corpus is [W] and the size of the vocabulary is |W|.

Note the structure of the signatures in Figure 2. Logically, a signature consists of two lists of pointers: one a list of pointers to stems, the other a list of pointers to suffixes. To specify a list of length n, we must specify at the beginning of the signature that n items will follow, and this requires just slightly more than log2 n bits to do; I will use the notation λ(n) to indicate this function. A pointer to a stem t, in turn, is of length −log2 prob(t), a basic principle of information theory; hence the length of a signature is the sum of the (negative) log probabilities of its stems, plus that of its suffixes, plus the number of bits it takes to specify the number of its stems and suffixes using the λ function. We will return in a moment to how we determine the probabilities of the stems and suffixes; looking ahead, it will be the empirical frequency.

Let us consider the length of the stem list T. As we have already observed, its length is λ(|T|) — the length of the information specifying how long the list is — plus the length of each stem specification. In most of our work we make the assumption that the length of a stem is the number of letters in it, weighted by the factor log2 26, converting to binary bits in a language with 26 letters. The same reasoning holds for the suffix list F: its length is λ(|F|) plus the length of each suffix, which we may take to be the total number of letters in the suffix times log2 26.

We return to the question of how long the pointer to a stem or suffix is. The probability of a stem is its frequency, i.e., the total number of words in the corpus whose analysis includes the stem in question, divided by the total number of words; the probability of a suffix is defined in parallel fashion. Using [W] to indicate the total count of words in the corpus, we may say that the length of a pointer to a stem t is log2([W]/[t]), and a pointer to a suffix f is of length log2([W]/[f]). (The letter-count measure is a reasonable and convenient assumption, but it may not be precise enough for all work. A more refined measure would take the length of a letter to be −1 times the binary log of its frequency; a still more refined measure would base the probability of a letter on its bigram context — this matters for English, where stem-final t is very common. In addition, there is information in the linear order in which the letters are stored, which grows with the length of the string; this is an additional consideration in an MDL analysis of morphology, pressing in favor of breaking words into morphemes when possible.) In parallel fashion, a pointer to a signature σ is of length log2([W]/[σ]).
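To make the bookkeeping concrete, the elementary costs just described can be written down directly. The following minimal Python sketch uses the simple log2 approximations adopted above; the function names (lam, pointer_cost, letter_cost) are ad hoc choices of mine, and the sketch is an illustration of the accounting, not the implementation used for the experiments reported here.

```python
import math

def lam(n):
    """Bits needed to announce that a list contains n entries (roughly log2 n)."""
    return math.log2(n) if n > 1 else 1.0

def pointer_cost(total_tokens, item_tokens):
    """Length of a pointer to an item whose empirical probability is
    item_tokens / total_tokens, i.e., -log2 of that probability."""
    return math.log2(total_tokens / item_tokens)

def letter_cost(string, alphabet_size=26):
    """Cost of spelling a stem or suffix out letter by letter."""
    return len(string) * math.log2(alphabet_size)
```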
We have now settled the question of how to determine the length of our initial model; we next must determine the probability that the model assigns to each word in the corpus, and, armed with that knowledge, we will be able to compute the compressed length of the corpus. The morphology assigns a probability to each word w as the product of the probability of w's signature times the probability of w's stem given its signature and of w's suffix given its signature:

prob(w) = prob(σ) · prob(t | σ) · prob(f | σ), where σ is the signature associated with t, that is, σ = sig(t).

Thus, while stems and suffixes — which are defined relative to a particular morphological model — are assigned their empirical frequency as their probability, words are assigned a probability based on the model, one which will always depart from the empirical frequency. The compression of the corpus is thus worse than would be a compression based on word frequency alone; or, to put it another way, the morphological analysis in which all words are unanalyzed is the analysis in which each word is trivially assigned its own empirical frequency. But this decrease in compression that comes with morphological analysis is the price willingly paid for not having to enter every distinct word in the stem list of the morphology.

Summarizing, the compressed length of the corpus is

Σ_{w ∈ W} [ log2([W]/[σ(w)]) + log2([σ(w)]/[t(w)]) + log2([σ(w)]/[f(w) in σ(w)]) ],

where we have summed over the word tokens in the corpus, σ(w) is the signature to which word w is assigned, and [f(w) in σ(w)] is the number of tokens in σ(w) bearing the suffix f(w). The compressed length of the model is the length of the stem list, the suffix list, and the signature list. The length in bits of the stem list is

λ(|T|) + Σ_{t ∈ T} l_typo(t),

and the length of the suffix list is

λ(|F|) + Σ_{f ∈ F} l_typo(f),

where l_typo( ) is the measurement of the length of a string of letters in bits, which we take to be log2 26 times the number of letters. The length of the signature list is

λ(|Σ|) + Σ_{σ ∈ Σ} L(σ),

where L(σ) is the length of signature σ. If the set of stems linked to signature σ is T(σ), and the set of suffixes linked to signature σ is F(σ), then

L(σ) = λ(|T(σ)|) + λ(|F(σ)|) + Σ_{t ∈ T(σ)} log2([W]/[t]) + Σ_{f ∈ F(σ)} log2([W]/[f]).
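The corpus-compression half of this computation can be sketched as follows. This is a Python illustration of the formula above, not the program used for the reported results; the data-structure choices (a dictionary from word to stem/suffix pair, within-signature counts) are mine, and an unanalyzed word is treated here as its own stem with an empty suffix.

```python
import math
from collections import Counter

def compressed_corpus_length(word_counts, analysis):
    """
    word_counts: dict mapping each distinct word to its token count in the corpus.
    analysis: dict mapping each distinct word to a (stem, suffix) pair;
        an unanalyzed word is its own stem with the empty suffix ''.
    Each token is charged -log2 prob(sig) - log2 prob(stem|sig) - log2 prob(suffix|sig),
    with all probabilities taken as empirical frequencies, as in the formula above.
    """
    W = sum(word_counts.values())

    # each stem's signature is the alphabetized set of suffixes it occurs with;
    # all unanalyzed words end up sharing the signature ('',)
    suffixes_of_stem = {}
    for word, (stem, suffix) in analysis.items():
        suffixes_of_stem.setdefault(stem, set()).add(suffix)
    sig_of_stem = {stem: tuple(sorted(fs)) for stem, fs in suffixes_of_stem.items()}

    sig_tokens, stem_tokens, suffix_in_sig_tokens = Counter(), Counter(), Counter()
    for word, count in word_counts.items():
        stem, suffix = analysis[word]
        sig = sig_of_stem[stem]
        sig_tokens[sig] += count
        stem_tokens[stem] += count
        suffix_in_sig_tokens[(sig, suffix)] += count

    total_bits = 0.0
    for word, count in word_counts.items():
        stem, suffix = analysis[word]
        sig = sig_of_stem[stem]
        bits_per_token = (math.log2(W / sig_tokens[sig])
                          + math.log2(sig_tokens[sig] / stem_tokens[stem])
                          + math.log2(sig_tokens[sig] / suffix_in_sig_tokens[(sig, suffix)]))
        total_bits += count * bits_per_token
    return total_bits
```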
It is no doubt easy to get lost in the formalism, so it may be helpful to point out what the additional structure accomplishes. We observed above that the MDL analysis is an elaboration of the insight that the best morphological analysis of a corpus is obtained by counting the total number of letters in the list of stems and suffixes under various analyses and choosing the analysis for which this sum is the least. This simple insight fails rapidly when we observe, in a language such as English, that there are a large number of verb stems that end in t; verbs appear with a null suffix and with the suffixes s, ed, and ing. But once we have 11 stems ending in t, the naive letter-counting approach will judge it a good idea to create a new set of suffixes t, ted, ts, and ting, because those 10 letters will allow us to remove 11 or more letters from the list of stems. It is the creation of the lists — notably the signature list — and an information cost which increases as probability decreases that overcomes that problem. Creating a new signature may save some information associated with the stem list in the morphology; but since the length of a pointer to a signature σ is log2([W]/[σ]), the length of the pointers to the signatures for all of the words in the corpus associated with the old signature or the new signature will be longer than the length of the pointers to a single signature whose token count is the sum of the token counts of the two combined; that is, if the two signatures have token counts n1 and n2, then

n1 · log2([W]/n1) + n2 · log2([W]/n2) > (n1 + n2) · log2([W]/(n1 + n2)).

The model presented above is too simple in that it underestimates the gain achieved by morphological analysis in case the word that is analyzed is also a stem of a larger word. For example, if a corpus contains the words work and working, then morphological analysis will allow us to dispense with the form working: it is modeled by the stem work and the suffixes NULL and ing. If the corpus also includes workings, the analysis working-s additionally lowers the cost of the stem working. Clearly, we would like stems to be in turn analyzable as stems plus suffixes. Implementing this suggestion involves the following modifications. Each pointer to a stem must contain a flag indicating whether what follows is a pointer to a simple member of the stem list or a triple: a pointer to a signature, a stem, and a suffix. In the latter case — which would be the case for the word [working]s — the pointer to the stem consists of a triple identical to the representation of the word working. The number of words in the corpus has now changed, in that the word [working]s now contains two words, not one. We will need to distinguish between counts of a word w where w is a freestanding word and counts where it is part of a larger word; we shall refer to the latter class as secondary counts. In order to simplify computation and exposition, we have adopted the convention that the total number of words remains fixed even when nested structure is posited by the morphology, thus forcing the convention that counts are distributed in a nonintegral fashion over the two or more nested word structures found in complex words. (We consider the more complex case in the appendix.)

We may distinguish between those words, like work or working, whose immediate analysis involves a stem appearing in the stem list, and those whose analysis, like workings, involves recursive structure. As we have noted, every stem entry in a signature begins with a flag indicating which kind of stem it is, and this flag — like the pointers — carries an information cost equal to the binary log of the inverse of the relative frequency of its value: one cost for simple stems, another for complex stems. We also keep track separately of the total number of words in the corpus that are morphologically analyzed, and refer to this set as W_A; this consists of all words except those that are analyzed as having no suffix. In some cases a word posited in the course of such an analysis did not appear independently as a freestanding word in the corpus; we will refer to these inferred words as "virtual" words, with virtual counts.

MDL thus provides a figure of merit that we wish to minimize, and we will seek heuristics that modify the morphological analysis in such a fashion as to decrease this figure of merit in a large proportion of cases. In any given case, we will accept a modification to our analysis just in case the description length decreases, and we will suggest that this strategy coincides with traditional linguistic judgment in all clear cases.
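The pointer-cost inequality can be checked directly. A small numeric illustration in Python, with an arbitrary corpus size and token counts chosen only for the demonstration:

```python
import math

def signature_pointer_bits(W, n):
    # n word tokens each pay log2(W / n) bits to point at a signature seen n times
    return n * math.log2(W / n)

W = 100_000                     # corpus size in tokens (arbitrary)
n1, n2 = 400, 300               # token counts of the two candidate signatures
split  = signature_pointer_bits(W, n1) + signature_pointer_bits(W, n2)
merged = signature_pointer_bits(W, n1 + n2)
print(split > merged)           # True: pointers are always cheaper with one combined signature
```

It is this asymmetry in pointer cost that blocks the spurious t/ted/ts/ting signature described above.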
The MDL model designed in the preceding section will be of use only if we can provide a practical means of creating one or more plausible morphologies for a given corpus; that is, we need bootstrapping heuristics that enable us to go from a corpus to such a morphology. As we shall see, it is not in fact difficult to come up with a plausible initial morphology, but I would like to consider first an approach which, though it might seem like the most natural one to try, fails — and for an interesting reason.

The problem we wish to solve can be thought of as one suited to an expectation-maximization approach. Along such a line, each word w of length n would initially be conceived of as being analyzed in n different ways, cutting the word into stem + suffix after i letters, 1 ≤ i ≤ n, with each of these n analyses being assigned probability mass [w]/n. That probability mass is then summed over the resulting set of stems and suffixes, and, on successive iterations, each of the n cuts into stem + suffix is weighted by its probability; that is, if the ith cut of word w of length l cuts it into a stem of length i and a suffix of length l − i, then the probability of that cut is defined as

prob(cut_i) = freq(w_{1,i}) · freq(w_{i+1,l}) / Σ_j freq(w_{1,j}) · freq(w_{j+1,l}),

where w_{j,k} refers to the substring of w from the jth to the kth letter. Probability mass for the stem and the suffix in each such cut is then augmented by an amount equal to the frequency of word w times the probability of the cut. After several iterations, estimated probabilities stabilize, and each word is analyzed on the basis of the cut with the largest probability. This initially plausible approach fails because it always prefers an analysis in which either the stem or the suffix consists of a single letter. More importantly, the probability that a sequence of one or more word-final letters is a suffix is very poorly modeled by the sequence's frequency. To put the point another way, even the initial heuristic, in analyzing one particular word, must take into account all of the other analyses in a more articulated way than this particular approach does.

I will turn now to two alternative heuristics that succeed in producing an initial morphological analysis; it seems likely that one could construct a number of additional heuristics of this sort. The point to emphasize is that the primary responsibility for the overall morphology lies not with the initial heuristic but with the MDL model described in the previous section. The heuristics described in this section create an initial morphology that can serve as a starting point in a search for the shortest overall description of the morphology; we deal with that process in Section 5.

A heuristic that I will call the take-all-splits heuristic, which considers all cuts of a word of length l into stem + suffix, w_{1,i} + w_{i+1,l}, where 1 ≤ i < l, works — much like the EM approach mentioned immediately above — much more effectively if the probability is assigned on the basis of a Boltzmann distribution (see below). The function H assigns a value to a split of word w of length l after the ith letter by weighting the log frequency of each piece by that piece's length:

H(w, i) = i · log freq(w_{1,i}) + (l − i) · log freq(w_{i+1,l}).

H does not assign a proper distribution; we use it to assign a probability to the cut of w into w_{1,i} + w_{i+1,l} as

prob(cut_i) = exp(H(w, i)) / Σ_{j=1}^{l−1} exp(H(w, j)).

Clearly, the effect of this model is to encourage splits containing relatively long suffixes and stems.
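A one-pass sketch of this heuristic in Python follows. The exact form of H shown here — a length-weighted sum of log frequencies over all word-initial and word-final substrings — is my reading of the description above, so the details should be treated as assumptions; the point is the shape of the computation rather than its constants.

```python
import math
from collections import Counter

def take_all_splits(words):
    """
    words: dict mapping each distinct word to its corpus frequency.
    Returns a dict word -> (stem, suffix), choosing the cut whose Boltzmann
    probability (derived from the length-weighted score H) is highest.
    """
    # frequency of every word-initial string (candidate stem) and word-final string (candidate suffix)
    stem_freq, suffix_freq = Counter(), Counter()
    for w, c in words.items():
        for i in range(1, len(w)):
            stem_freq[w[:i]] += c
            suffix_freq[w[i:]] += c

    def H(w, i):
        # weight the log frequency of each piece by its length
        return i * math.log(stem_freq[w[:i]]) + (len(w) - i) * math.log(suffix_freq[w[i:]])

    best = {}
    for w in words:
        if len(w) < 2:
            continue
        scores = [H(w, i) for i in range(1, len(w))]
        z = max(scores)                                  # for numerical stability
        boltzmann = [math.exp(s - z) for s in scores]
        total = sum(boltzmann)
        k = max(range(len(scores)), key=lambda j: boltzmann[j] / total)
        i_best = k + 1                                   # cut after i_best letters
        best[w] = (w[:i_best], w[i_best:])
    return best
```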
It is instructive to think about why this should be so. Consider a word such as diplomacy. If we cut the word into the pieces diplomac + y, its score rests on freq(diplomac-) and freq(-y); contrast that value with the corresponding values of two other analyses, such as diploma + cy and diplom + acy. Now, the ratio of the frequency of words that begin with diploma- to those that begin with diplomac- is less than 3, while the ratio of the frequency of words that end in -y to those that end in -cy is much greater. In graphical terms, we might note that tries based on forward spelling have by far the greatest branching structure early in the word, while tries based on backward spelling have the greatest branching structure close to the root node, which is to say at the end of the word.

For each word we note what the best parse is, that is, which parse has the highest rating by virtue of the H-function. We iterate until no word changes its optimal parse, which empirically takes fewer than five iterations on the entire lexicon. (Experimenting with other functions suggests empirically that the details of our choice of figure of merit and of the distribution reported in the text are relatively unimportant; as long as the measurement is capable of ensuring that the cuts are not strongly pushed towards the periphery, the results we get are robust.) We now have an initial split of all words into stem plus suffix — even for words like this and stomach we have such an initial split.

The second approach that we have employed provides a much more rapid convergence on the suffixes of a language. Since our goal presently is to identify word-final suffixes, we assume by convention that all words end with an end-of-word symbol, and we then tally the counts of all n-grams of length between two and six letters that appear word-finally. Thus, for example, the word elephant contains one occurrence of the word-final bigram (t plus the end-of-word symbol), one occurrence of the word-final trigram (nt plus the end-of-word symbol), and so forth. We stop at 6-grams on the grounds that no grammatical morphemes require more than five letters in the languages we are dealing with. We also require that the n-gram in question be a proper substring of its word. We employ as a rough indicator of the likelihood that such an n-gram n1 n2 ... nk is a grammatical morpheme the measure

( [n1 n2 ... nk] / total count of k-grams ) · log ( freq(n1 n2 ... nk) / (freq(n1) · freq(n2) · ... · freq(nk)) ),

which we may refer to as the weighted mutual information. We choose the top 100 n-grams on the basis of this measure as our set of candidate suffixes. We should bear in mind that this ranking is guaranteed to give incorrect results as well as correct ones: for example, while ing is very highly ranked in an English corpus, ting and ng will also be highly ranked — the former because so many stems end in t, the latter because all ings end in ng — but of the three, only ing is a morpheme in English. We then parse all words into stem plus suffix, if such a parse is possible, using a suffix from this candidate set. A considerable number of words will have more than one such parse under those conditions, and we utilize the figure of merit described in the preceding section to choose among those potential parses. (Various versions of Harris's method of morpheme identification can be used at this stage as well. Harris's approach has the interesting characteristic that it is possible to impose restrictions that improve its precision while at the same time worsening its recall to unacceptably low levels. In work in progress we are exploring the consequences of using such an initial heuristic with significantly higher precision, while depending on MDL considerations to extend the recall of the entire morphology.)
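A sketch of this second bootstrapping heuristic in Python follows. The exact normalization of the weighted mutual information is reconstructed here — n-gram counts relative to all word-final k-grams, and letter frequencies taken over the whole corpus — so the constants should be taken as assumptions rather than as the formula behind the reported results; the end-of-word symbol is left implicit, since only word-final n-grams are tallied.

```python
import math
from collections import Counter

def candidate_suffixes(words, max_len=6, top_k=100):
    """
    words: dict mapping each distinct word to its corpus frequency.
    Scores every word-final n-gram of 2..max_len letters (proper substrings only)
    by a frequency-weighted mutual information and returns the top_k candidates.
    """
    letter_freq, total_letters = Counter(), 0
    ngram_count, total_by_len = Counter(), Counter()

    for w, c in words.items():
        for ch in w:
            letter_freq[ch] += c
            total_letters += c
        for k in range(2, max_len + 1):
            if k < len(w):                       # the n-gram must be a proper substring
                ngram_count[w[-k:]] += c
                total_by_len[k] += c

    def score(g):
        k = len(g)
        p_g = ngram_count[g] / total_by_len[k]
        p_indep = 1.0
        for ch in g:
            p_indep *= letter_freq[ch] / total_letters
        # frequency-weighted mutual information: relative count times log(observed / independent)
        return p_g * math.log2(p_g / p_indep)

    return sorted(ngram_count, key=score, reverse=True)[:top_k]
```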
Regardless of which of the two approaches we have taken, our task now is to decide which splits are worth keeping, which ones need to be dropped, and which ones need to be modified. In addition, if we follow the take-all-splits approach, we have many splits which are splits between prefix and stem: words beginning with de- will at this point all be split after the initial de. So there is work to be done, and for this we return to the central notion of the signature.

Each word has now been assigned an optimal split into stem and suffix by the initial heuristic chosen; we consider henceforth only the best parse for each word, and we retain only those stems and suffixes that were optimal for at least one word. For each stem, we make a list of those suffixes that appear with it, and we call an alphabetized list of such suffixes the stem's signature; we may think of it as a mini-paradigm. For example, in one English corpus, the stems despair, pity, appeal, and insult appear with the suffixes ing and ingly. However, they also appear as freestanding words, and so we use the word NULL to indicate a zero suffix; thus their signature is NULL.ing.ingly. Similarly, the stems assist and ignor are assigned the signature ance.ant.ed.ing in a certain corpus. Because each stem is associated with exactly one signature, we will also use the term signature to refer to the set of affixes along with the associated set of stems, when no ambiguity arises.

We establish a data structure of all signatures, keeping track, for each signature, of which stems are associated with it. As an initial heuristic, subject to correction below, we discard all signatures that are associated with only one stem and all signatures with only one suffix. The remaining signatures we shall call regular signatures, and we will call all of the suffixes that we find in them the regular suffixes. As we shall see, the regular suffixes are not quite the suffixes we would like to establish for the language, but they are a very good approximation and constitute a good initial analysis. The nonregular signatures produced by the take-all-splits approach are typically of no interest, as examples such as ch.e.erial.erials.rimony.ron.s.uring and el.ezed.nce.reupon.ther illustrate; the reader may identify the single English pseudo-stem that occurs with each of these signatures.

The regular signatures are thus those that specify exactly the entire set of suffixes used by at least two stems in the corpus. The presence of a signature rests upon the existence of a structure in which there are at least two members present in each column — stems in one column, suffixes in the other — all combinations indicated in the structure are present in the corpus, and, in addition, each stem is found with no other suffix. If we have a morphological pattern of five suffixes, let us say, and there is a large set of stems that appear with all five suffixes, then that set will give rise to a regular signature with five suffixal members. This simple pattern would be perturbed by the extraneous fact of a stem appearing with these suffixes but also with some other suffix; and if all stems that associate with these five suffixes appeared with idiosyncratic suffixes, then the signature of those five suffixes would never emerge. In general, however, in a given corpus a good proportion of stems appears with a complete set of what a grammarian would take to be the paradigmatic set of suffixes for its class; this will be neither the stems with the highest nor the stems with the lowest frequency, but those in between. In addition, there will be a large range of words with no acceptable morphological analysis, which is just as it should be: John, stomach, the, and so forth.
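The signature bookkeeping just described is straightforward to sketch in Python. Here best_parse is assumed to hold the optimal stem/suffix split chosen by one of the bootstrapping heuristics, with NULL marking a zero suffix; the two thresholds at the end implement the discard-singletons heuristic from the text.

```python
from collections import defaultdict

def build_signatures(best_parse):
    """
    best_parse: dict word -> (stem, suffix), with 'NULL' as the zero suffix.
    Returns the regular signatures: alphabetized suffix tuples shared by at least
    two stems and containing at least two suffixes, mapped to their stems.
    """
    suffixes_of_stem = defaultdict(set)
    for word, (stem, suffix) in best_parse.items():
        suffixes_of_stem[stem].add(suffix)

    stems_of_signature = defaultdict(set)
    for stem, suffixes in suffixes_of_stem.items():
        stems_of_signature[tuple(sorted(suffixes))].add(stem)

    return {sig: stems
            for sig, stems in stems_of_signature.items()
            if len(stems) >= 2 and len(sig) >= 2}
```

A structure of this sort is all that is needed to read off the regular signatures and, from them, the regular suffixes.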
To get a sense of what is identified as a regular signature in a language such as English, let us look at the results of a preliminary analysis, presented in Table 2, of the 86,976 words of The Adventures of Tom Sawyer, by Mark Twain. The signatures in Table 2 are ordered by the breadth of a signature, defined as follows: a signature σ has both a stem count and an affix count, and we use the product log(stem count) · log(affix count) as a rough guide to the centrality of a signature in the corpus. The suffixes identified are given in Table 3 for the final analysis of this text. In this corpus of some 87,000 words, there are 202 regular signatures identified through the procedure we have outlined so far, and 803 signatures composed entirely of regular suffixes. The top five signatures are NULL.ed.ing, e.ed.ing, NULL.s, NULL.ed.s, and NULL.ed.ing.s; the third is primarily composed of noun stems, while the others are verb stems. Number 7, NULL.ly, identifies 105 words, all of them adjectives except for the stems sal, name, love, shape, and perhaps earth. The results in English are typical of the results in the other European languages that I have studied.

These results, then, are derived by the application of the heuristics described above. The overall sketch of the morphology of the language is quite reasonable already in its outlines. Nevertheless, the results, when studied up close, show that there remain a good number of errors that must be uncovered using additional heuristics and evaluated using the MDL measure. These errors may be organized in several ways, and in the next section we discuss some of the approaches we have taken to resolving them. (The accompanying table of error examples includes, among other cases, a spurious signature NULL.ly.st, errors involving the ending -ence with stems such as safe-, behold-, deaf-, weak-, and sunk-, and words in -ily wrongly analyzed as containing -le plus -ly.)

We can use the description length of the grammar formulated above to evaluate any proposed revision: as we have already observed, we note the description length of the grammar and of the compressed corpus, perform a modification of the grammar, recompute the two lengths, and see whether the modification improved the resulting description length. Following the morphological analysis of words described in the previous section, suffixes are checked to determine whether they are spurious amalgams of independently motivated suffixes: ments, for example, is typically but wrongly analyzed as a suffix. Upon identification of such suffixes as spurious, the vocabulary containing these words is reanalyzed. For example, in Tom Sawyer the suffix ings is split into ing and s, and thus the word beings is split into being plus s; the word being is, of course, already in the lexicon. The word breathings is similarly reanalyzed as breathing plus s, but the word breathing is not found in the lexicon; it is entered, with the morphological analysis breath-ing. Words that already existed include chafing, dripping, evening, feeling, and flogging, while new "virtual" words include belonging, bustling, chafing, and fastening. The only new word that arises that is worthy of notice is jing, derived from the word jings, found in Twain's expression by jings. In a larger corpus of 500,000 words, 64 suffixes are tested for splitting and 31 are split, including tions, ists, ians, ened, lines, ents, and ively. Note that what it means to say that "suffixes are checked to see if they are spurious amalgams" is that each suffix is checked to see whether it is the concatenation of two independently existing suffixes; if that is the case, the entire description length of the corpus is recomputed under the alternative analysis, and the reanalysis is adopted if and only if the description length decreases. The same holds for the other heuristics discussed immediately below. Following this stage, the signatures are studied to determine
if there is a consistent pattern in which all suffixes from the signature begin with the same letter or sequence of letters as in tetingtssuch signatures are evaluated to determine if the description length improves when such a signature is modified to become eings etcit is necessary to precede this analysis by one in which all signatures are removed which consist of a single suffix composed of a single letterthis set of signatures includes for example the singleton signature e which is a perfectly valid suffix in english however if we permit all words ending in e but having no other related forms to be analyzed as containing the suffix e then the e will be inappropriately highly valued in the analysisin the next stage of analysis triage signatures containing a small number of stems or a single suffix are explored in greater detailthe challenge of triage is to determine when the data is rich and strong enough to support the existence of a linguistically real signaturea special case of this is the question of how many stems must exist to motivate the existence of a signature when the stems only appear with a single suffixfor example if a set of words appear in english ending with hood should the morphological analysis split the words in that fashion even if the stems thereby created appear with no other suffixesand at the other extreme what about a corpus which contains the words look book loot and bootdoes that data motivate the signature 1k for the stems boo and loothe matter is rendered more complex by a number of factorsthe length of the stems and suffixes in question clearly plays a role suffixes of one letter are all other things being equal suspicious the pair of stems loo and boo appearing with the signature k t does not provide an example of a convincing linguistic patternon the other hand if the suffix is long enough even one stem may be enough to motivate a signature especially if the suffix in question is otherwise quite frequent in the languagea single stem occurring with a single pair of suffixes may be a very convincing signature for other reasons as wellin italian for example even in a relatively small corpus we are likely to find a signature such as aandoanoareataateatiatoazione6 with several stems in it once we are sure that the 10suffix signature is correct then the discovery of a subsignature along with a stem is perfectly natural and we would not expect to find multiple stems associated with each of the occurring combinationsand a signature may be quotcontaminatedquot so to speak by a spurious intrudera corpus containing rag rage raged raging and rags gave rise to a signature nulleedings for the stem ragit seems clear that we need to use information that we have obtained regarding the larger robust patterns of suffix combinations in the language to influence our decisions regarding smaller combinationswe return to the matter of triage belowwe are currently experimenting with methods to improve the identification of related stemscurrent efforts yield interesting but inconclusive resultswe compare all pairs of stems to determine whether they can be related by a simple substitution process ignoring those pairs that are related by virtue of one being the stem of the other already within the analysiswe collect all such rules and compare by frequencyin a 500000word english corpus the top two such pairs of 11 relationships are 46 stems related by a final d s alternation including intrudintrus apprendendapprenhens providprovis suspendsus pens and eludelus and 43 stems related 
by a final i ~ y alternation, including reli~rely, ordinari~ordinary, decri~decry, suppli~supply, and accompani~accompany. This approach can quickly locate patterns of allomorphy that are well known in the European languages; however, we do not currently have a satisfactory means of segregating meaningful cases such as these from the spurious cases of stems whose forms are parallel but ultimately not related.

On the whole, the inclusion of the strategies described in the preceding sections leads to very good, but by no means perfect, results. In this section we shall review some of these results qualitatively, some quantitatively, and discuss briefly the origin of the incorrect parses. We obtain the most striking result by looking at the top list of signatures in a language: if we have some familiarity with the language, it is almost as if the textbook patterns have been ripped out and placed in a chart. As these examples suggest, the large morphological patterns identified tend to be quite accurately depicted. To illustrate the results on European languages, we include signatures found from a 500,000-word corpus of English, a 350,000-word corpus of French, Don Quijote (which contains 124,716 words of Spanish), a 125,000-word corpus of Latin, and 100,000 words and 1,000,000 words of Italian; the 500,000-word corpus of English contains slightly more than 30,000 distinct words. To illustrate the difference of scale that is observed depending on the size of the corpus, compare the signatures obtained in Italian from a corpus of 100,000 words and from a corpus of 1,000,000 words: when one sees the rich inflectional pattern emerging, as with the example of the 10 suffixes on first-conjugation stems, one cannot but be struck by the grammatical detail that emerges from the study of a larger corpus. (The corresponding table for Latin includes signatures such as i.is.o.orum.os.um.us and a.ae.am.as.i.is.o.orum.os.um.us, with stems such as brachi, carmel, cenacul, damn, evangeli, hysop, lectul, liban, offici, and ole.) Turning to French, we may briefly inspect the top 10 signatures that we find in a 350,000-word corpus, in Table 5. It is instructive to consider the signature a.aient.ait.ant.e.ent.er.es.èrent.é.ée.és, which is ranked ninth among signatures: it contains a large part of the suffixal pattern of the most common regular conjugation, the first conjugation. Within the scope of the effort covered by this project, the large-scale generalizations extracted about these languages appear to be quite accurate.

It is equally important to take a finer-grained look at the results and to quantify them. To do this, we have selected from the English and the French analyses a set of 1,000 consecutive words in the alphabetical list of words from the corpus and divided them into distinct sets according to the analysis provided by the present algorithm; see Tables 10 and 11 (one of these tables reads: good 833, wrong analysis 61, failed to analyze 42, spurious analysis 64, out of the 1,000 words). The first category of analyses, labeled good, is self-explanatory in the case of most words, and many of the errors are equally easy to identify by eye. Quite honestly, I was surprised how many words there were for which it was difficult to say what the correct analysis was. For example, consider the pair abolition and abolish. The words are clearly related, and abolition clearly has a suffix — but does it have the suffix ion, tion, or ition? And does abolish have the suffix ish or sh? It is hard to say. In a case of this sort, my policy for assigning success or failure has been influenced by two criteria. The first is that analyses are better insofar as they explicitly relate words that are appropriately parallel in semantics, as in the abolish/abolition case; thus I would give credit to either the
analysis abolitionabolish or the analysis abolitionabolishthe second criterion is a bit more subtleconsider the pair of words alumnus and alumnishould these be morphologically analyzed in a corpus of english or rather should failure to analyze them be penalized for this morphology algorithmmy principle has been that if i would have given the system additional credit by virtue of discovering that relationship i have penalized it if it did not discover it that is a relatively harsh criterion to apply to be sureshould proper names be morphologically analyzedthe answer is often unclearin the 500000 word english corpus we encounter alex and alexis and the latter is analyzed as alexisi have scored this as correct much as i have scored as correct the analyses of alexander and alexandreon the other hand the failure to analyze alexeyeva despite the presence of alex and alexei does not seem to me to be an error while the analysis anabel has been scored as an error but johnson have not been treated as errorsdifficult to classify too is the treatment of words such as abetabettedabettingthe present algorithm selects the uniform stem abet in that case assigning the signature nulltedtingultimately what we would like to have is a means of indicating that the doubled t is predictable and that the correct signature is nulledingat present this is not implemented and i have chosen to mark this as correct on the grounds that it is more important to identify words with the same stem than to identify the correct signaturestill unclear cases remain for example consider the words accompaniedaccompanimentaccompanistthe word accompany does not appear as such but the stem accompany is identified in the word accompanyingthe analysis accompanist fails to identify the suffix ist but it will successfully identify the stem as being the same as the one found in accompanied and accompaniment which it would not have done if it had associated the i with the suffixi have in any event marked this analysis as wrong but without much conviction behind the decisionsimilarly the analysis of french putative stem embelli with suffixes erentt passes the low test of treating related words with the same stem but i have counted it as in error on the grounds that the analysis is unquestionably one letter off from the correct traditional analysis of secondconjugation verbsthis points to a more general issue regarding french morphology which is more complex than that of englishthe infinitive ecrire to write would ideally be analyzed as a stem ecr plus a derivational suffix i followed by an infinitival suffix resince the derivational suffix i occurs in all its inflected forms it is not unreasonable to find an analysis in which the i is integrated into the stem itselfthis is what the algorithm does employing the stem ecri for the words ecrire and ecrit ecrit in turn is the stem for ecrite ecrite ecrites ecrits and ecriturean alternate stem form ecriv is used for past tense forms with the suffixes aient ait ant irent itthe algorithm does not make explicit the connection between these two stems as it ideally wouldthus in the tables good indicates the categories of words where the analysis was clearly right while the incorrect analyses have been broken into several categorieswrong analysis is for bimorphemic words that are analyzed but incorrectly analyzed by the algorithmfailed to analyze are the cases of words that are bimorphemic but 29 my inability to determine the correct morphological analysis in a wide range of words that i know perfectly 
well seems to me to be essentially the same response as has often been observed in the case of speakers of japanese chinese and korean when forced to place word boundaries in email romanizations of their languageultimately the quality of a morphological analysis must be measured by how well the algorithm handles the clear cases how well it displays the relationships between words perceived to be related and how well it serves as the language model for a stochastic morphology of the language in questiongoldsmith unsupervised learning of the morphology of a natural language for which no analysis was provided by the algorithm and spurious analysis are the cases of words that are not morphologically complex but were analyzed as containing a suffixfor both english and french correct performance is found in 83 of the words details are presented in tables 10 and 11for english these figures correspond to precision of 829 859 and recall of 829 9048triage as noted above the goal of triage is to determine how many stems must occur in order for the data to be strong enough to support the existence of a linguistically real signaturemdl provides a simple but not altogether satisfactory method of achieving this endusing mdl for this task amounts to determining whether the total description length decreases when a signature is eliminated by taking all of its words and eliminating their morphological structure and reanalyzing the words as morphologically simple this is how we have implemented it in any event one could well imagine a variant under which some or all subparts of the signature that comprised other signatures were made part of those other signaturesfor example the signature nullinely is motivated just for the stem justunder the former triage criterion justine and justly would be treated as unanalyzed words whereas under the latter just and justly would be made members of the nullly signature and just and justine might additionally be treated as comprising parts of the signature nulline along with bernard gerald eng capitol elephant def and sup our mdlbased measure tests the goodness of a signature by testing each signature o to see if the analysis is better when that signature is deletedthis deletion entails treating the signature words as members of the signature of unanalyzed words each word member of the signature however now becomes a separate stem with all of the increase in pointer length that that entails as well as increase in letter content for the stem componentone may draw the following conclusions i believe from the straightforward application of such a measureon the whole the effects are quite good but by no means as close as one would like to a human decisions in a certain number of casesin addition the effects are significantly influenced by two decisions that we have already discussed the information associated with each letter and the decision as to whether to model suffix frequency based solely on signatureinternal frequences or based on frequency across the entire morphologythe greater the information associated with each letter the more worthwhile morphology is when suffix frequencies are based on the frequency of the suffixes in the entire lexicon rather than conditionally within the signature in question the loss of a signature entails a hit on the compression of all other words in the lexicon that employed that suffix hence triage is less dramatic under that modeling assumptionconsider the effect of this computation on the signatures produced from a 500000word corpus of 
Consider the effect of this computation on the signatures produced from a 500,000-word corpus of English. After the modifications discussed to this point, but before triage, there were 603 signatures with two or more stems and two or more suffixes, and there were 1,490 signatures altogether. Application of triage leads to the loss of only 240 signatures. The single-suffix signatures that were eliminated were ide, it, rs, he, ton, o, and ie, all of which are spurious. However, a number of signatures that should not have been lost were eliminated, most strikingly NULL.ness with 51 good analyses, NULL.ful with 18 good analyses, and NULL.ish with only 8 analyses. Most of the cases eliminated, however, were indeed spurious. Counting only those signatures that involve suffixes and that were in fact correct, the percentage of the words whose analysis was incorrectly eliminated by triage was 21.9. Interestingly, in light of the discussion of results above, one of the signatures that was lost was i.us for the Latin plural; also eliminated was NULL.nt.

Because maximizing correct results is as important as testing the MDL model proposed here, I have also utilized a triage algorithm that departs from the MDL-based optimization in certain cases, which I shall identify in a moment. I believe that when the improvements identified in Section 10 below are made, the purely MDL-based algorithm will be more accurate; that prediction remains to be tested, to be sure. On this account, we discard any signature for which the total number of stem letters is less than five, and any signature consisting of a single one-letter suffix; we keep, then, only signatures for which the savings in letter counts is greater than 15 (15 is chosen empirically).
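This heuristic variant is easy to state in code. The following Python sketch spells out one reading of the thresholds; in particular, the "savings in letter counts" is computed here as the letters in the signature's words minus the letters in its stem and suffix lists, which is my interpretation rather than a formula given in the text.

```python
def keep_signature(stems, suffixes, min_stem_letters=5, min_savings=15):
    """Heuristic triage: stems and suffixes are the string members of one signature,
    with 'NULL' standing for the zero suffix."""
    suffix_len = lambda f: 0 if f == 'NULL' else len(f)

    if sum(len(t) for t in stems) < min_stem_letters:
        return False                                   # too little stem material
    if len(suffixes) == 1 and suffix_len(next(iter(suffixes))) <= 1:
        return False                                   # a single one-letter suffix

    # letters in the words the signature covers vs. letters in its stem and suffix lists
    letters_in_words = sum(len(t) + suffix_len(f) for t in stems for f in suffixes)
    letters_in_lists = sum(len(t) for t in stems) + sum(suffix_len(f) for f in suffixes)
    return letters_in_words - letters_in_lists > min_savings
```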
rejected by the mdl measure that we have implemented herein work in progress we treat groups of signatures as parts of larger groups called paradigmsa paradigm consisting of the suffixes nulledings for example includes all 15 possible combinations of these suffixeswe can in general estimate the number of stems we would expect to appear with zero counts for one or more of the suffixes given a frequency distribution such as a multinomial distribution for the suffixesin this way we can establish some reasonable frequencies for the case of stems appearing in a corpus with only a single suffixit appears at this time that the unavailability of this information is the single most significant because of inaccuracies in the present algorithmit is thus of considerable importance to get a handle on such estimatesa number of practical questions remain at this pointthe most important are the following principles at work relating pairs of stems as in english many stems are related to another stem with a doubled consonant we have been reasonably successful in identifying such semiregular morphology and will report this in a future publicationthere is a soft line between the discovery of related stems on the one hand and the parsing of a word into several suffixesfor example in the case mentioned briefly above for french it is not unreasonable to propose two stems for to write ecri and ecriv each used in distinct formsit would also be reasonable in this case to analyze the latter stem ecriv as composed of ecri plus a suffix v although in this case there are no additional benefits to be gained from the more finegrained analysis31 in particular consider a paradigm with a set 0 of suffixeswe may represent a subsignature of that signature as a string of os and is indicating whether the ith suffix is contained in the subsignatureif a stem t occurs t times then the probability that it occurs without a particular suffix is t the probability that it occurs without all of the suffixes missing from the particular subsignature b bk is and the probability that the particular subsignature b will arise at all is the sum of those values over all of the stems in the signature thus all that is necessary is to estimate the hidden parameters of the frequencies of the individual suffixes in the entire paradigmsee the following note as well32 there may appear to be a contradiction between this observation about paradigms and the statement in the preceding paragraph that mdl rejects signature mergersbut there is no contradictionthe rejection of signature mergers is performed by the model which posits that frequencies of suffixes inside a signature are based only on suffix frequencies of the stems that appear with exactly the same set of suffixes in the corpusit is that modeling assumption that needs to be dropped and replaced by a multinomialbased frequency prediction based on counts over the 2n 1 signatures belonging to each paradigm of length n 2identifying paradigms from signatureswe would like to automatically identify nulleding as a subcase of the more general nulledingsthis is a difficult task to accomplish well as english illustrates for we would like to be able to determine that nulls is primarily a subcase of nulls and not of nulleds3determining the relationship between prefixation and suffixationthe system currently assumes that prefixes are to be stripped off the stem that has already been identified by suffix strippingin future work we would like to see alternative hypotheses regarding the relationship of 
prefixation and suffixation tested by the mdl criterion4identifying compoundsin work reported in goldsmith and reutter we have explored the usefulness of the present system for determining the linking elements used in german compounds but more work remains to be done to identify compounds in generalhere we run straight into the problem of assigning very short strings a lower likelihood of being words than longer stringsthat is it is difficult to avoid positing a certain number of very short stems as in english m and an the first because of pairs such as me and my the second because of pairs such as an and any but these facts should not be taken as strong evidence that man is a compound5as noted at the outset the present algorithm is limited in its ability to discover the morphology of a language in which there are not a sufficient number of words with only one suffix in the corpusin work in progress we are developing a related algorithm that deals with the more general case 33 we noted in the preceding section that we can estimate the likelihood of a subsignature assuming a multinomial distributionwe can in fact do better than was indicated there in the sense that for a given observed signature a′ whose suffixes constitute a subset of a larger signature a we can compute the likelihood that a is responsible for the generation of a′ where the c are the frequencies associated with each of the suffixes in a and the b are the counts of the corresponding suffixes in the observed signature a′ using stirling's approximation if we normalize the cs to form a distribution and denote these by di then this can be simply expressed in terms of the kullbackleibler distance in the more general case it is even more important to develop a model that deals with the layered relationship among suffixes in a languagethe present system does not explicitly deal with these relationships for example while it does break up ments into ment and s it does not explicitly determine which suffixes s may attach to etcthis must be done in a more adequate version6in work in progress we have added to the capability of the algorithm the ability to posit suffixes that are in part subtractive morphemesthat is in english we would like to establish a single signature that combines nulledings and eedesing we posit an operator <x> which deletes a preceding character x and with the <e> mechanism we can establish a single signature null<e>ed<e>ings composed of familiar suffixes null and s plus two suffixes <e>ed and <e>ing which delete a preceding e if one is present11conclusion linguists face at the present time the question of whether and to what extent informationtheoretic notions will play a significant role in our understanding of linguistic theory over the years to come and the present system perhaps casts a small ray of light in this areaas we have already noted mdl analysis makes clear what the two areas are in which an analysis can be judged it can be judged in its ability to deal with the data as measured by its ability to compress the data and it can be judged on its complexity as a theorywhile the former view is undoubtedly controversial when viewed from the light of mainstream linguistics it is the prospect of being able to say something about the complexity of a theory that is potentially the most exciting
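The deletion operator sketched in point 6 is easy to illustrate. The fragment below assumes the notation used above, in which a suffix written <e>ed deletes a preceding e (if one is present) before attaching ed; the stems and the combined signature are illustrative examples only.

```python
# A small sketch of suffixation with a subtractive <x> operator.
import re

def apply_suffix(stem, suffix):
    match = re.match(r"<(.)>(.*)", suffix)     # e.g. "<e>ing": delete "e", then add "ing"
    if match:
        char, rest = match.groups()
        if stem.endswith(char):
            stem = stem[:-1]                   # subtract the preceding character
        return stem + rest
    if suffix == "NULL":
        return stem
    return stem + suffix

signature = ["NULL", "<e>ed", "<e>ing", "s"]   # the combined NULL.<e>ed.<e>ing.s signature
print([apply_suffix("love", s) for s in signature])   # ['love', 'loved', 'loving', 'loves']
print([apply_suffix("jump", s) for s in signature])   # ['jump', 'jumped', 'jumping', 'jumps']
```

even more importantly to the extent that we can make these notions explicit we stand a chance of being able to develop an explicit model of language acquisition employing these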
ideasa natural question to ask is whether the algorithm presented here is intended to be understood as a hypothesis regarding the way in which human beings acquire morphologyi have not employed in the design of this algorithm a great deal of innate knowledge regarding morphology but that is for the simple reason that knowledge of how words divide into subpieces is an area of knowledge which no one would take to be innate in any direct fashion if sanity is parsed as san ity in one language it may perfectly well be parsed as sa nity in another languagethat is while passion may flame disagreements between partisans of universal grammar and partisans of statistically grounded empiricism regarding the task of syntax acquisition the task which we have studied here is a considerably more humble one which must in some fashion or other be figured out by grunt work by the language learnerit thus allows us a much sharper image of how powerful the tools are likely to be that the language acquirer brings to the taskand does the human child perform computations at all like the ones proposed herefrom most practical points of view nothing hinges on our answer to this question but it is a question that ultimately we cannot avoid facingreformulated a bit one might pose the question does the young language learnerwho has access not only to the spoken language but perhaps also to the rudiments of the syntax and to the intended meaning of the words and sentencesdoes the young learner have access to additional information that simplifies the task of morpheme identificationit is the belief that the answer to this question is yes that drives the intuition that an mdlbased analysis of the present sort is an unlikely model of human language acquisitionbut i think that such a belief is very likely mistakenknowledge of semantics and even grammar is unlikely to make the problem of morphology discovery significantly easierin surveying the various approaches to the problem that i have explored i do not know of any problem that would have been solved by having direct access to either syntax or semanticsto the contrary i have tried to find the simplest algorithm capable of dealing with the facts as we know themthe problem of determining whether two distinct signatures derive from a single larger paradigm would be simplified with such knowledge but that is the exception and not the ruleso in the end i think that the hypothesis that the child uses an mdllike analysis has a good deal going for itin any event it is far from clear to me how one could use information either grammatical or contextual to elucidate the problem of the discovery of morphemes without recourse to notions along the lines of those used in the present algorithmof course in all likelihood the task of the present algorithm is not the same as the language learner task it seems unlikely that the child first determines what the words are in the language and then infers the morphemesthe more general problem of language acquisition is one that includes the problems of identifying morphemes of identifying words both morphologically analyzed and nonanalyzed of identifying syntactic categories of the words in question and of inferring the rules guiding the distribution of such syntactic categoriesit seems to me that the only manageable kind of approach to dealing with such a complex task is to view it as an optimization problem of which mdl is one particular stylechomsky early conception of generative grammar was developed along these lines as well his notion of 
an evaluation metric for grammars was equivalent in its essential purpose to the description length of the morphology utilized in the present paperthe primary difference between the lslt approach and the mdl approach is this the lslt approach conjectured that the grammar of a language could be factored into two parts one universal and one languageparticular and when we look for the simplest grammatical description of a given corpus it is only the languageparticular part of the description that contributes to complexitythat is what the theory stipulatesby contrast the mdl approach makes minimal universal assumptions and so the complexity of everything comprising the description of the corpus must be counted in determining the complexity of the descriptionthe difference between these hypotheses vanishes asymptotically as the size of the language increases or to put it another way strong chomskian rationalism is indistinguishable from pure empiricism as the information content of the mdlinduced grammar increases in size relative to the information content of ugrephrasing that slightly the significance of chomskianstyle rationalism is greater the simpler languageparticular grammars are and it is less significant as languageparticular grammars grow larger and in the limit as the size of grammars grows asymptotically traditional generative grammar is indistinguishable from mdlstyle rationalismwe return to this point belowthere is a striking point that has so far remained tacit regarding the treatment of this problem in contemporary linguistic theorythat point is this the problem addressed in this paper is not mentioned not defined and not addressedthe problem of dividing up words into morphemes is generally taken as one that is so trivial and goldsmith unsupervised learning of the morphology of a natural language devoid of interest that morphologists or linguists more generally simply do not feel obliged to think about the problem34 in a very uninteresting sense the challenge presented by the present paper to current morphological theory is no challenge at all because morphological theory makes no claims to knowing how to discover morphological analysis it claims only to know what to do once the morphemes have been identifiedthe early generative grammar view as explored in lslt posits a grammar of possible grammars that is a format in which the rules of the morphology and syntax must be written and it establishes the semantics of these rules which is to say how they functionthis grammar of grammars is called variously universal grammar or linguistic theory and it is generally assumed to be accessible to humans on the basis of an innate endowment though one need not buy into that assumption to accept the rest of the theoryin syntactic structures chomsky famously argued that the goal of a linguistic theory that produces a grammar automatically given a corpus as input is far too demanding a goalhis own theory cannot do that and he suggests that no one else has any idea how to accomplish the taskhe suggests furthermore that the next weaker positionthat of developing a linguistic theory that could determine given the data and the account whether this was the best grammarwas still significantly past our theoretical reach and he suggests finally that the next weaker position is a not unreasonable one to expect of linguistic theory that it be able to pass judgment on which of two grammars is superior with respect to a given corpusthat position is of course exactly the position taken by the mdl framework 
which offers no help in coming up with analyses but which is excellent at judging the relative merits of two analyses of a single corpus of datain this paper we have seen this point throughout for we have carefully distinguished between heuristics which propose possible analyses and modifications of analyses on the one hand and the mdl measurement which makes the final judgment call deciding whether to accept a modification proposed by the heuristics on the otheron so much the early generative grammar of lslt and mdl agreebut they disagree with regard to two points and on these points mdl makes clearer more explicit claims and both claims appear to be strongly supported by the present studythe two points are these the generative view is that there is inevitably an idiosyncratic character to universal grammar that amounts to a substantive innate capacity on the grounds that the task of discovering the correct grammar of a human language given only the corpus available to the child is insurmountable because this corpus is not sufficient to home in on the correct grammarthe research strategy associated with this position is to hypothesize certain compression techniques that lead to significant reduction in the size of the grammars of a number of natural languages compared to what would have been possible without themsequential rule ordering is one such suggestion discussed at length in lsltto reformulate this in a fashion that allows us to make a clearer comparison with mdl we may formulate early generative grammar in the following way to select the correct universal grammar out of a set of proposed universal grammars ug given corpora for a range of human languages select that ug for which the sum of the sizes of the grammars for all of the corpora is the smallestit does not followit need not be the casethat the grammar of english selected by the winning ug is the shortest one of all the candidate english grammars but the winning ug is allround the supplier of the shortest grammars around the worldm mdl could be formulated in those terms undoubtedly but it also can be formulated in a languageparticular fashion which is how it has been used in this papergenerative grammar is inherently universalist it has no languageparticular format other than to say that the best grammar for a given language is the shortest grammarbut we know that such a position is untenable and it is precisely out of that knowledge that mdl was bornthe position is untenable because we can always make an arbitrarily small compression of a given set of data if we are allowed to make the grammar arbitrarily complex to match and potentially to overfit the data and it is untenable because generative grammar offers no explicit notion of how well a grammar must match the training datamdl insight is that it is possible to make explicit the tradeoff between complexity of the analysis and snugness of fit to the datacorpus in questionthe first tool in that computational tradeoff is the use of a probabilistic model to compress the data using stock tools of classical information theorythese notions were rejected as irrelevant by early workers in early generative grammar notions of probabilistic grammar due to solomonoff were not integrated into that framework and the possibility of using them to quantify the goodness of fit of a grammar to a corpus was not exploitedit seems to me that it is in this context that we can best understand the way in which traditional generative grammar and contemporary probabilistic grammar formalism can be 
understood as complementing each otheri at least take it in that way and this paper is offered in that spiritsince what we are really interested in computing is not the minimum description length as such but rather the difference between the description length of one model and that of a variant it is convenient to consider the general form of the difference between two mdl computationsin general let us say we will compare two analyses si and s2 for the same corpus where s2 typically contains some item that si does not let us write out the difference in length between these two analyses as in calculating the length of si minus the length of s2the general formulas derived in are not of direct computational interest they serve rather as a template that can be filled in to compute the change in description length occasioned by a particular structural change in the morphology proposed by a particular heuristicthis template is rather complex in its most general form but it simplifies considerably in any specific applicationthe heuristic determines which of the terms in these formulas take on nonzero values and what their values are the overall formula determines whether the change in question improves the description lengthin addition we may regard the formulas in 35 as the discussion in the text may suggest i am skeptical of the generative position and i would like to identify what empirical result would confirm the generative position and dissolve my skepticismthe result would be the discovery of two grammars of english g1 and g2 with the following properties g1 is inherently simpler than g2 using some appropriate notion of turing machine program complexity and yet g2 is the correct grammar of english based on some of the complexity of g2 being the responsibility of linguistic theory hence quotfreequot in the complexity competition between g1 and g2that is the proponent of the generative view must be willing to acknowledge that overall complexity of the grammar of a language may be greater than logically necessary due to evolution investment in one particular style of programming languagegoldsmith unsupervised learning of the morphology of a natural language as offering us an exact and explicit statement of how a morphology can be improvedthe notation can be considerably simplified if we take some care in advancenote first that in and below several items are subscripted to indicate whether they should be counted as in si or s2much of the simplification comes from observing first that second that this difference is generally computed inside a summation over a set of morphemes and hence the first term simplifies to a constant times the type count of the morphemes in the set in questionindeed so prevalent in these calculations is the formula where the numerator is a count in si and the denominator a count of the same variable in s2 if no confusion would result we write ax36 let us review the terms listed in w is a measure of the change in the number of total words due to the proposed modification an increase in the total number of words results in a slightly negative valuein the text above i indicated that we could by judicious choice of word count distribution keep wi w2 i have included the more general case in where the two may be differentins and wc are similar measures in the change of words that have morphologically simple and morphologically complex stems respectivelythey measure the global effects of the typically small changes brought about by a hypothetical change in morphological 
modelin the derivation of each formula we consider first the case of those morphemes that are found in both si and s2 followed by those found only in si and then those only found in s2 si 52recall that angle brackets are used to indicate the type count of a set the number of typographically distinct members of a setin we derive a formula for the change in length of the suffix component of the morphologyobserve the final formulation in which the first two terms involve suffixes present in both si and s2 while the third term involves suffixes present only in si and the fourth term involves suffixes present only in s2this format will appear in all of the components of this computationrecall that the function ltypo specifies the length of a string in bits which we may take here to be simply log times the number of characters in the stringin we derive the corresponding formula for the stem componentthe general form of the computation of the change to the signature component is more complicated and this complexity motivates a little bit more notation to simplify itfirst we can compute the change in the pointers to the signatures and the information that each signature contains regarding the count of its stems and suffixes 36 we beg the reader indulgence in recognizing that we prepend the operator a immediately to the left of the name of a set to indicate the change in the size of the counts of the set which is to say quotawquot is shorthand for quotaquot and quotaquot for quotaquot as in but the heart of the matter is the treatment of the stems and suffixes within the signatures given in bear in mind first of all that each signature consists of a list of pointers to stems and a list of pointers to suffixesthe treatment of suffixes is given in and is relatively straightforward but the treatment of stems is a bit more complexrecall that all items on the stem list will be pointed to by exactly one stem pointer located in some particular signatureall stem pointers in a signature that point to stems on the suffix list are directly described a quotsimplequot word a notion we have already encountered a word whose stem is not further analyzablebut other words may be complex that is may contain a stem whose pointer is to an analyzable word and hence the stem representation consists of a pointer triple a pointer to a signature a stem within the signature and a suffix within the signatureand each stem pointer is preceded by a flag indicating which type of stem it iswe thus have three things whose difference in the two states s1 and s2 we wish to computethe difference of the lengths of the flag is given in in we need change in the total length of the pointers to the stems and this has actually already been computed during the computation of 37 finally in the set of pointers from certain stem positions to words consists of pointers to all of the words that we have already labeled as being in wc and we can compute the length of these pointers by adding counts to these words the length of the pointers to these words needs to be computed anyway in determining the compressed length of the corpusthis completes the computations needed to compare two states of the morphologyin addition we must compute the difference in the compressed length of the corpus in the two states and this is given in 37 the equivalence between the number computed in and the number needed here is not exactly fortuitous but it is not an error eitherthe figure computed in describes an aspect of the complexity of the morphology as a whole 
whereas the computation described here in the text is what it is because we have made the assumption that each stem occurs in exactly one signaturethat assumption is not strictly speaking correct in natural language we could well imagine an analysis that permitted the same stem to appear in several distinct signatures and in that case the computation here would not reduce to but the assumption made in the text is entirely reasonable and simplifies the construction for us
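The comparisons described in this appendix all come down to scoring two states of the morphology and keeping the shorter one. The sketch below shows only that control flow; the cost functions are simplified stand-ins (letters at log2(26) bits, a unigram code for the corpus) rather than the full componentwise formulas given above.

```python
# A compact sketch of the two-part comparison: description length = model cost +
# compressed-corpus cost, and a proposed modification is accepted only if it makes
# that total smaller.
import math
from collections import Counter

def model_cost(stems, suffixes):
    # letters in the stem and suffix lists, at log2(26) bits per letter
    return math.log2(26) * sum(len(x) for x in list(stems) + list(suffixes))

def corpus_cost(analyses, corpus):
    # -log2 probability of each token's analysis under its relative frequency
    counts = Counter(analyses[w] for w in corpus)
    total = sum(counts.values())
    return sum(-math.log2(counts[analyses[w]] / total) for w in corpus)

def description_length(stems, suffixes, analyses, corpus):
    return model_cost(stems, suffixes) + corpus_cost(analyses, corpus)

def accept_change(state1, state2, corpus):
    """Keep the modification proposed by a heuristic iff it shortens the description."""
    dl1 = description_length(*state1, corpus)   # state = (stems, suffixes, analyses)
    dl2 = description_length(*state2, corpus)
    return dl2 < dl1
```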
J01-2001
unsupervised learning of the morphology of a natural languagethis study reports the results of using minimum description length analysis to model unsupervised learning of the morphological segmentation of european languages using corpora ranging in size from 5000 words to 500000 wordswe develop a set of heuristics that rapidly develop a probabilistic morphological grammar and use mdl as our primary tool to determine whether the modifications proposed by the heuristics will be adopted or notthe resulting grammar matches well the analysis that would be developed by a human morphologistin the final section we discuss the relationship of this style of mdl grammatical analysis to the notion of evaluation metric in early generative grammarwe propose a recursive structure such that stems can consist of a substem and a suffixwe use a morphological representation based on signatures which are sets of affixes that represent a family of words sharing an inflectional or derivational morphologywe observe that less frequent and shorter affixes are more likely to be erroneous
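As a small illustration of the signature representation mentioned in this summary, the sketch below groups stems by the exact set of suffixes they were observed with; the word list and the dotted rendering of signatures are illustrative only.

```python
# A minimal sketch of signatures: a signature is the set of suffixes shared by a
# family of stems, and each stem belongs to exactly one signature.
from collections import defaultdict

def build_signatures(analyses):
    """analyses: dict mapping a stem to the set of suffixes it was seen with."""
    signatures = defaultdict(set)
    for stem, suffixes in analyses.items():
        signatures[frozenset(suffixes)].add(stem)
    return signatures

analyses = {
    "jump": {"NULL", "s", "ed", "ing"},
    "walk": {"NULL", "s", "ed", "ing"},
    "laugh": {"NULL", "s", "ed", "ing"},
    "dog":  {"NULL", "s"},
    "cat":  {"NULL", "s"},
}
for suffixes, stems in build_signatures(analyses).items():
    print(".".join(sorted(suffixes)), "->", sorted(stems))
# NULL.ed.ing.s -> ['jump', 'laugh', 'walk']
# NULL.s -> ['cat', 'dog']
```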
improving accuracy in word class tagging through the combination of machine learning systems we examine how differences in language models learned by different datadriven systems performing the same nlp task can be exploited to yield a higher accuracy than the best individual system we do this by means of experiments involving the task of morphosyntactic word class tagging on the basis of three different tagged corpora four wellknown tagger generators are trained on the same corpus data after comparison their outputs are combined using several voting strategies and secondstage classifiers all combination taggers outperform their best component the reduction in error rate varies with the material in question but can be as high as 243 with the lob corpus we examine how differences in language models learned by different datadriven systems performing the same nlp task can be exploited to yield a higher accuracy than the best individual systemwe do this by means of experiments involving the task of morphosyntactic word class tagging on the basis of three different tagged corporafour wellknown tagger generators are trained on the same corpus dataafter comparison their outputs are combined using several voting strategies and secondstage classifiersall combination taggers outperform their best componentthe reduction in error rate varies with the material in question but can be as high as 243 with the lob corpusin all natural language processing systems we find one or more language models that are used to predict classify or interpret languagerelated observationsbecause most realworld nlp tasks require something that approaches full language understanding in order to be perfect but automatic systems only have access to limited information as well as limited resources for reasoning with that information such language models tend to make errors when the system is tested on new materialthe engineering task in nlp is to design systems that make as few errors as possible with as little effort as possiblecommon ways to reduce the error rate are to devise better representations of the problem to spend more time on encoding language knowledge or to find more training data however given limited resources these options are not always availablerather than devising a new representation for our task in this paper we combine different systems employing known representationsthe observation that suggests this approach is that systems that are designed differently either because they use a different formalism or because they contain different knowledge will typically produce different errorswe hope to make use of this fact and reduce the number of errors with very little additional effort by exploiting the disagreement between different language modelsalthough the approach is applicable to any type of language model we focus on the case of statistical disambiguators that are trained on annotated corporathe examples of the task that are present in the corpus and its annotation are fed into a learning algorithm which induces a model of the desired inputoutput mapping in the form of a classifierwe use a number of different learning algorithms simultaneously on the same training corpuseach type of learning method brings its own quotinductive biasquot to the task and will produce a classifier with slightly different characteristics so that different methods will tend to produce different errorswe investigate two ways of exploiting these differencesfirst we make use of the gang effectsimply by using more than one 
classifier and voting between their outputs we expect to eliminate the quirks and hence errors that are due to the bias of one particular learnerhowever there is also a way to make better use of the differences we can create an arbiter effectwe can train a secondlevel classifier to select its output on the basis of the patterns of cooccurrence of the outputs of the various classifiersin this way we not only counter the bias of each component but actually exploit it in the identification of the correct outputthis method even admits the possibility of correcting collective errorsthe hypothesis is that both types of approaches can yield a more accurate model from the same training data than the most accurate component of the combination and that given enough training data the arbiter type of method will be able to outperform the gang typein the machine learning literature there has been much interest recently in the theoretical aspects of classifier combination both of the gang effect type and of the arbiter type in general it has been shown that when the errors are uncorrelated to a sufficient degree the resulting combined classifier will often perform better than any of the individual systemsin this paper we wish to take a more empirical approach and examine whether these methods result in substantial accuracy improvements in a situation typical for statistical nlp namely learning morphosyntactic word class tagging from an annotated corpus of several hundred thousand wordsmorphosyntactic word class tagging entails the classification of each token of a natural language text in terms of an element of a finite palette of word class descriptors the reasons for this choice of task are severalfirst of all tagging is a widely researched and wellunderstood task second current performance levels on this task still leave room for improvement quotstateoftheartquot performance for datadriven automatic word class taggers on the usual type of material is at 9697 correctly tagged words but accuracy levels for specific classes of ambiguous words are much lowerfinally a number of rather different methods that automatically generate a fully functional tagging system from annotated text are available offtheshelffirst experiments demonstrated the basic validity of the approach for tagging with the error rate of the best combiner being 191 lower than that of the best individual tagger however these experiments were restricted to a single language a single tagset and more importantly a limited amount of training data for the combinersthis led us to perform further more extensive tagging experiments before moving on to other taskssince then the method has also been applied to other nlp tasks with good results in the remaining sections we first introduce classifier combination on the basis of previous work in the machine learning literature and present the combination methods we use in our experiments then we explain our experimental setup also describing the corpora and tagger generators used in the experimentsin section 4 we go on to report the overall results of the experiments starting with a comparison between the component taggers and continuing with a comparison of the combination methodsthe results are examined in more detail in section 5 where we discuss such aspects as accuracy on specific words or tags the influence of inconsistent training data training set size the contribution of individual component taggers and tagset granularityin section 6 we discuss the results in the light of related work after 
which we conclude with a summary of the most important observations and interesting directions for future researchin recent years there has been an explosion of research in machine learning on finding ways to improve the accuracy of supervised classifier learning methodsan important finding is that a set of classifiers whose individual decisions are combined in some way can be more accurate than any of its component classifiers if the errors of the individual classifiers are sufficiently uncorrelated there are several ways in which an ensemble can be created both in the selection of the individual classifiers and in the way they are combinedone way to create multiple classifiers is to use subsamples of the training examplesin bagging the training set for each individual classifier is created by randomly drawing training examples with replacement from the initial training set in boosting the errors made by a classifier learned from a training set are used to construct a new training set in which the misclassified examples get more weightby sequentially performing this operation an ensemble is constructed this class of methods is also called arcing in general boosting obtains better results than bagging except when the data is noisy another way to create multiple classifiers is to train classifiers on different sources of information about the task by giving them access to different subsets of the available input features still other ways are to represent the output classes as bit strings where each bit is predicted by a different component classifier or to develop learningmethodspecific methods for ensuring variation in the way the different classifiers of an ensemble are constructed in this paper we take a multistrategy approach in which an ensemble is constructed by classifiers resulting from training different learning methods on the same data methods to combine the outputs of component classifiers in an ensemble include simple voting where each component classifier gets an equal vote and weighted voting in which each component classifier vote is weighted by its accuracy more sophisticated weighting methods have been designed as wellali and pazzani apply the naive bayes algorithm to learn weights for classifiersvoting methods lead to the gang effect discussed earlierthe let t be the component taggers si the most probable tag for a token tok as suggested by t and let the quality of tagger t be measured by simple algorithms for voting between component taggers most interesting approach to combination is stacking in which a classifier is trained to predict the correct output class when given as input the outputs of the ensemble classifiers and possibly additional information stacking can lead to an arbiter effectin this paper we compare voting and stacking approaches on the tagging problemin the remainder of this section we describe the combination methods we use in our experimentswe start with variations based on weighted votingthen we go on to several types of stacked classifiers which model the disagreement situations observed in the training data in more detailthe input to the secondstage classifier can be limited to the firstlevel outputs or can contain additional information from the original input patternwe will consider a number of different secondlevel learnersapart from using three wellknown machine learning methods memorybased learning maximum entropy and decision trees we also introduce a new method based on grouped votingthe most straightforward method to combine the results of 
multiple taggers is to do an nway voteeach tagger is allowed to vote for the tag of its choice and the tag with the highest number of votes is selectedthe question is how large a vote we allow each tagger the most democratic option is to give each tagger one vote this does not require any tuning of the voting mechanism on training datahowever the component taggers can be distinguished by several figures of merit and it appears more useful to give more weight to taggers that have proved their qualityfor this purpose we use precision and recall two wellknown measures which can be applied to the evaluation of tagger output as wellfor any tag x precision measures which percentage of the tokens tagged x by the tagger are also tagged x in the benchmarkrecall measures which percentage of the tokens tagged x in the benchmark are also tagged x by the taggerwhen abstracting away from individual tags precision and recall are equal and measure how many tokens are tagged correctly in this case we also use the more generic term accuracywe will call the voting method where each tagger is weighted by its general quality totprecision ie each tagger votes its overall precisionto allow for more detailed interactions each tagger is weighted by the quality in relation to the current situation ie each tagger votes its precision on the tag it suggests this way taggers that are accurate for a particular type of ambiguity can act as specialized expertsthe information about each tagger quality is derived from a crossvalidation of its results on the combiner training setthe precise setup for deriving the training data is described in more detail below in section 3we have access to even more information on how well the taggers performwe not only know whether we should believe what they propose but know as well how often they fail to recognize the correct tag this information can be used by forcing each tagger to add to the vote for tags suggested by the opposition too by an amount equal to 1 minus the recall on the opposing tag as an example suppose that the mxpost tagger suggests dt and the hmm tagger tnt suggests cs if mxpost has a precision on dt of 0.9658 and a recall on cs of 0.8927 and tnt has a precision on cs of 0.9044 and a recall on dt of 0.9767 then dt receives a 0.9658 + 0.0233 = 0.9891 vote and cs a 0.9044 + 0.1073 = 1.0117 votenote that simple voting combiners can never return a tag that was not suggested by a majority of the component taggersas a result they are restricted to the combination of taggers that all use the same tagsetthis is not the case for all the following combination methods a fact which we have recently exploited in bootstrapping a word class tagger for a new corpus from existing taggers with completely different tagsets one of the best methods for tagger combination in van halteren zavrel and daelemans is the tagpair methodit looks at all situations where one tagger suggests tag1 and the other tag2 and estimates the probability that in this situation the tag should actually be tagx although it is presented as a variant of voting in that paper it is in fact also a stacked classifier because it does not necessarily select one of the tags suggested by the component taggerstaking the same example as in the voting section above if tagger mxpost suggests dt and tagger tnt suggests cs we find that the probabilities for the appropriate tag are cs (subordinating conjunction) 0.4623 cs22 (second half of a twotoken subordinating conjunction eg so that) 0.0171 dt (determiner) 0.4966 ql (quantifier) 0.0103 wpr (whpronoun) 0.0137
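The precision/recall-weighted vote just described can be written down directly; the sketch below hard-codes only the precision and recall figures quoted in the running example (a real combiner would read them from the cross-validation of the combiner training set) and reproduces the DT/CS vote totals.

```python
# A sketch of precision/recall-weighted voting. The tables below contain only the
# figures quoted in the running example above; everything else is illustrative.
precision = {("mxpost", "dt"): 0.9658, ("tnt", "cs"): 0.9044}
recall    = {("mxpost", "cs"): 0.8927, ("tnt", "dt"): 0.9767}

def precision_recall_vote(suggestions):
    """suggestions maps each tagger to the tag it proposes; returns the vote totals."""
    votes = {}
    for tagger, tag in suggestions.items():
        # a tagger votes its precision on the tag it suggests ...
        votes[tag] = votes.get(tag, 0.0) + precision[(tagger, tag)]
        # ... and adds 1 - recall for every competing tag suggested by the opposition
        for other_tagger, other_tag in suggestions.items():
            if other_tag != tag:
                votes[other_tag] = votes.get(other_tag, 0.0) + 1.0 - recall[(tagger, other_tag)]
    return votes

print(precision_recall_vote({"mxpost": "dt", "tnt": "cs"}))
# {'dt': 0.9891..., 'cs': 1.0117...} -- cs wins, as in the example above
```

when combining the taggers every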
tagger pair is taken in turn and allowed to vote as described above for each possible tag if a tag pair tag1tag2 has never been observed in the training data we fall back on information on the individual taggers ie p let ti be the component taggers and s the most probable tag for a token tok as suggested by tithen the vote v for tagging token tok with tag tag is given by the tagpair algorithm for voting between component taggersif the case to be classified corresponds to the featurevalue pair set with the weight wfsub for an fsub containing n elements equal to wn where wm is a normalizing the weighted probability distribution voting classification algorithm as used in the combination experiments and pnote that with this method a tag suggested by a minority of the taggers actually has a chance to win although in practice the chance to beat a majority is still very slightseeing the success of tagpair in the earlier experiments we decided to try to generalize this stacked probabilistic voting approach to combinations larger than pairsamong other things this would let us include word and context features here as wellthe method that was eventually developed we have called weighted probability distribution voting a wpdv classification model is not limited to pairs of features but can use the probability distributions for all feature combinations observed in the training data during voting we do not use a fallback strategy but use weights to prevent the lowerorder combinations from excessively influencing the final results when a higherorder combination is presentthe original system as used for this paper weights a combination of order n with a factor n a number based on the observation that a combination of order m contains m combinations of order that have to be competed withits only parameter is a threshold for the number of times a combination must be observed in the training data in order to be used which helps prevent a combinatorial explosion when there are too many atomic featuresin contrast to voting stacking classifiers allows the combination of the outputs of component systems with additional information about the decision contextwe investigated several versions of this approachin the basic version each training case for the secondlevel learner consists of the tags suggested by the component taggers and the correct tag in the more advanced versions we add information about the word in question and the tags suggested by all taggers for the previous and the next position these types of extended secondlevel features can be exploited by wpdv as well as by a wide selection of other machine learning algorithmsour first choice from these other algorithms is a memorybased secondlevel learner implemented in timbl a package developed at tilburg university and antwerp universitymemorybased learning is a learning method that is based on storing all examples of a task in memory and then classifying new examples by similaritybased reasoning from these stored exampleseach example is represented by a fixedlength vector of feature values called a caseif the case to be classified has been observed before that is if it is found among the stored cases the most frequent corresponding output is usedif the case is not found in the case base k nearest neighbors are determined with some similarity metric and the output is based on the observed outputs for those neighborsboth the value of k and the similarity metric used can be selected by parameters of the systemfor the tags version the similarity metric used 
is overlap and k is kept at 1for the other two versions a value of k 3 is used and each overlapping feature is weighted by its information gain the information gain of a feature is defined as the difference between the entropy of the a priori class distribution and the conditional entropy of the classes given the value of the featurethe second machine learning method maximum entropy modeling implemented in the maccent system does the classification task by selecting the most probable class given a maximum entropy mode16 this type of model represents examples of the task as sets of binary indicator features for the task at hand conjunctions of a particular tag and a particular set of feature valuesthe model has the form of an exponential model where i indexes all the binary features fi is a binary indicator function for feature i za is a normalizing constant and ai is a weight for feature ithe model is trained by iteratively adding binary features with the largest gain in the probability of the training data and estimating the weights using a numerical optimization method called improved iterative scalingthe model is constrained by the observed distribution of the features in the training data and has the property of having the maximum entropy of all models that fit the constraints ie all distributions that are not directly constrained by the data are left as uniform as possiblethe maximum entropy combiner takes the same information as the memorybased learner as input but internally translates all multivalued features to binary indicator functionsthe improved iterative scaling algorithm is then applied with a maximum of one hundred iterationsthis algorithm is the same as the one used in the mxpost tagger described in section 323 but without the beam search used in the tagging applicationthe third machine learning method we used is c50 an example of topdown induction of decision treesa decision tree is constructed by recursively partitioning the training set selecting at each step the feature that most reduces the uncertainty about the class in each partition and using it as a split c50 uses gain ratio as an estimate of the utility of splitting on a featuregain ratio corresponds to the information gain measure of a feature as described above except that the measure is normalized for the number of values of the feature by dividing by the entropy of the feature valuesafter the decision tree is constructed it is pruned to avoid overfitting using a method described in detail in quinlan a classification for a test case is made by traversing the tree until either a leaf node is found or all further branches do not match the test case and returning the most frequent class at the last nodethe case representation uses exactly the same features as the memorybased learnerin order to test the potential of system combination we obviously need systems to combine ie a number of different taggersas we are primarily interested in the combination of classifiers trained on the same data sets we are in fact looking for data sets and systems that can automatically generate a tagger on the basis of those data setsfor the current experiments we have selected three tagged corpora and four tagger generatorsbefore giving a detailed description of each of these we first describe how the ingredients are used in the experimentseach corpus is used in the same way to test tagger and combiner performancefirst of all it is split into a 90 training set and a 10 test setwe can evaluate the base taggers by using the whole 
training set to train the tagger generators and the test set to test the resulting taggerfor the combiners a more complex strategy must be followed since combiner training must be done on material unseen by the base taggers involvedrather than setting apart a fixed combiner training set we use a ninefold training strategy9 the 90 training set is split into nine equal partseach part is tagged with component taggers that have been trained on the other eight partsall results are then concatenated for use in combiner training so that in contrast to our earlier work all of the training set is effectively available for the training of the combinerfinally the resulting combiners are tested on the test setsince the test set is identical for all methods we can compute the statistical significance of the results using mcnemar chisquared test as we will see the increase in combiner training set size indeed results in better performanceon the other hand the increased amount of data also increases time and space requirements for some systems to such a degree that we had to exclude them from the experimentsthe data in the training set is the only information used in tagger and combiner construction all components of all taggers and combiners are entirely data driven and no manual adjustments are madeif any tagger or combiner construction method is parametrized we use default settings where availableif there is no default we choose intuitively appropriate values without preliminary testingin these cases we report such parameter settings in the introduction to the systemin the current experiments we make use of three corporathe first is the lob corpus which we used in the earlier experiments as well and which has proved to be a good testing groundwe then switch to wall street journal material tagged with the penn treebank ii tagset like lob it consists of approximately 1m words but unlike lob it is american englishfurthermore it is of a different structure and tagged with a rather different tagsetthe experiments with wsj will also let us compare our results with those reported by brill and wu which show a much less pronounced accuracy increase than ours with lobthe final corpus is the slightly smaller eindhoven corpus tagged with the wotan tagset this will let us examine the tagging of a language other than english furthermore the wotan tagset is a very detailed one so that the error rate of the individual taggers tends to be highermoreover we can more easily use projections of the tagset and thus study the effects of levels of granularity311 lobthe first data set we use for our experiments consists of the tagged lancasteroslobergen corpus the corpus comprises about one million words of british english text divided over 500 samples of 2000 words from 15 text typesthe tagging of the lob corpus which was manually checked and corrected is generally accepted to be quite accuratehere we use a slight adaptation of the tagsetthe changes are mainly cosmetic eg nonalphabetic characters such as quotquot in tag names have been replacedhowever there has also been some retokenization genitive markers have been split off and the negative marker nt has been reattachedan example sentence tagged with the resulting tagset is the tagset consists of 170 different tags and has an average ambiguity of 282 tags per wordform over the corpusan impression of the difficulty of the tagging task can be gained from the two baseline measurements in table 2 representing a completely random choice from the potential tags for each token 
and selection of the lexically most likely tag 11 the trainingtest separation of the corpus is done at utterance boundaries and leads to a 1046k token training set and a 115k token test setaround 214 of the test set are tokens unseen in the training set and a further 037 are known tokens but with unseen tags312 wsjthe second data set consists of 1m words of wall street journal materialit differs from lob in that it is american english and more importantly in that it is completely made up of newspaper textthe material is tagged with the penn treebank tagset which is much smaller than the lob oneit consists of only 48 tagsthere is no attempt to annotate compound words so there are no ditto tags10 ditto tags are used for the components of multitoken units eg if as well as is taken to be a coordinating conjunction it is tagged quotas_cc1 well_cc2 as_cc3quot using three related but different ditto tags11 these numbers are calculated on the basis of a lexicon derived from the whole corpusan actual tagger will have to deal with unknown words in the test set which will tend to increase the ambiguity and decrease random and lexprobnote that all actual taggers and combiners in this paper do have to cope with unknown words as their lexicons are based purely on their training sets12 because of the way in which the tagger generators treat their input we do count tokens as different even though they are the same underlying token but differ in capitalization of one or more characters13 in the material we have available quotes are represented slightly differently so that there are only 45 different tagsin addition the corpus contains a limited number of instances of 38 quotindeterminatequot tags eg jjivbd indicates a choice between adjective and past participle which cannot be decided or about which the annotator was unsuremostly because of the less detailed tagset the average ambiguity of the tags is lower than lob at 234 tags per token in the corpusthis means that the tagging task should be an easier one than that for lobthis is supported by the values for random and lexprob in table 2on the other hand the less detailed tagset also means that the taggers have less detailed information to base their decisions onanother factor that influences the quality of automatic tagging is the consistency of the tagging over the corpusthe wsj material has not been checked as extensively as the lob corpus and is expected to have a much lower consistency level the trainingtest separation of the corpus is again done at utterance boundaries and leads to a 1160k token training set and a 129k token test setaround 186 of the test set are unseen tokens and a further 044 are known tokens with previously unseen tags313 eindhoventhe final two data sets are both based on the eindhoven corpus this is slightly smaller than lob and wsjthe written part which we use in our experiments consists of about 750k words in samples ranging from 53 to 451 wordsin variety it lies between lob and wsj containing 150k words each of samples from dutch newspapers weeklies magazines popular scientific writings and novels the tagging of the corpus as used here was created in 1994 as part of a master thesis project it employs the wotan tagset for dutch newly designed during the projectit is based on the classification used in the most popular descriptive grammar of dutch the algemene nederlandse spraakkunst the actual distinctions encoded in the tagset were selected on the basis of their importance to the potential users as estimated from a number of 
indepth interviews with interested parties in the netherlandsthe wotan tagset is not only very large but furthermore contains distinctions that are very difficult for automatic taggers such as verb transitivity syntactic use of adjectives and the recognition of multitoken unitsit has an average ambiguity of 746 tags per token in the corpusfor our experiments we also designed a simplification of the tagset dubbed wotanlite which no longer contains the most problematic distinctionswotanlite has 129 tags and an average ambiguity of 346 tags per tokenan example of wotan tagging is given below quot first part of singular neutral case proper noun second part of singular neutral case proper noun the annotation of the corpus was realized by a semiautomatic upgrade of the tagging inherited from an earlier projectthe resulting consistency has never been exhaustively measured for either the wotan or the original taggingthe trainingtest separation of the corpus is done at sample boundaries this is a much stricter separation than applied for lob and wsj as for those two corpora our test utterances are related to the training ones by being in the same samplespartly as a result of this but also very much because of word compounding in dutch we see a much higher percentage of new tokens624 tokens unseen in the training seta further 145 known tokens have new tags for wotan and 045 for wotanlitethe training set consists of 640k tokens and the test set of 72k tokensthe second ingredient for our experiments is a set of four tagger generator systems selected on the basis of variety and availabilityeach of the systems represents a the features available to the four taggers in our studyexcept for mxpost all systems use different models for known and unknown wordshowever brill transformationbased learning system applies its two models in sequence when faced with unknown words thus giving the unknownword tagger access to the features used by the knownword model as wellthe first five columns in the table show features of the focus word capitalization hyphen or digit present and number of suffix or prefix letters of the wordbrill tbl system also takes into account whether the addition or deletion of a suffix results in a known lexicon entry the next three columns represent access to the actual word and any range of words to the left or right the last three columns show access to tag information for the word itself and any range of words left or right note that the expressive power of a method is not purely determined by the features it has access to but also by its algorithm and what combinations of the available features this allows it to consider popular type of learning method each uses slightly different features of the text and each has a completely different representation for its language modelall publicly available systems are used with the default settings that are suggested in their documentation321 errordriven transformationbased learningthis learning method finds a set of rules that transforms the corpus from a baseline annotation so as to minimize the number of errors a tagger generator using this learning method is described in brill the implementation that we use is eric brill publicly available set of c programs and ped scriptswhen training this system starts with a baseline corpus annotation aoin ao each known word is tagged with its most likely tag in the training set and each unknown word is tagged as a noun the system then searches through a space of transformation rules in order to reduce the 
discrepancy between its current annotation and the provided correct onethere are separate templates for known words and for unknown words the exact features used by this tagger are shown in table 1the learner for the unknown words is trained and applied firstbased on its output the rules for context disambiguation are learnedin each learning step all instantiations of the rule templates that are present in the corpus are generated and receive a scorethe rule that corrects the highest number of errors at step n is selected and applied to the corpus to yield an annotation an which is then used as the basis for step n 1the process stops when no rule reaches a score above a predefined thresholdin our experiments this has usually yielded several hundreds of rulesof the four systems tbl has access to the most features contextual information as well as lexical information however the conjunctions of these features are not all available in order to keep the search space manageableeven with this restriction the search is computationally very costlythe most important rule templates are of the form if context x change tag to tag where context is some condition on the tags of the neighbouring wordshence learning speed is roughly cubic in the tagset size17 when tagging the system again starts with a baseline annotation for the new text and then applies all rules that were derived during training in the sequence in which they were derivedthis means that application of the rules is fully deterministiccorpus statistics have been at the basis of selecting the rule sequence but the resulting tagger does not explicitly use a probabilistic model322 memorybased learninganother learning method that does not explicitly manipulate probabilities is machinebased learninghowever rather than extracting a concise set of rules memorybased learning focuses on storing all examples of a task in memory in an efficient way new examples are then classified by similaritybased reasoning from these stored examplesa tagger using this learning method mbt was proposed by daelemans et al 18 during the training phase the training corpus is transformed into two case bases one which is to be used for known words and one for unknown wordsthe cases are stored in an igtree and during tagging new cases are classified by matching cases with those in memory going from the most important feature to the least importantthe order of feature relevance is determined by information gainfor known words the system used here has access to information about the focus word and its potential tags the disambiguated tags in the two preceding positions and the undisambiguated tags in the two following positionsfor unknown words only one preceding and following position three suffix letters and information about capitalization and presence of a hyphen or a digit are used as featuresthe case base for unknown words is constructed from only those words in the training set that occur five times or less tropy modeling a maximum entropy tagger called mxpost was developed by ratnaparkhi 19 this system uses a number of word and context features rather similar to system mbt and trains a maximum entropy model using the improved iterative scaling algorithm for one hundred iterationsthe final model has a weighting parameter for each feature value that is relevant to the estimation of the probability p and combines the evidence from diverse features in an explicit probability modelin contrast to the other taggers both known and unknown words are processed by the same 
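The error-driven transformation-based training loop just described (instantiate all rule templates over the current annotation, score each candidate rule by the net number of errors it corrects, apply the best rule, and repeat until no rule clears the threshold) can be summarized in a short sketch. This is a minimal illustration under simplifying assumptions, not Brill's actual implementation: it uses a single hypothetical "change tag A to B when the previous tag is C" template and a toy tag sequence.

```python
# Minimal sketch of error-driven transformation-based learning (TBL).
# Single assumed rule template: "change tag A to tag B when the previous tag is C".
# This illustrates the training loop only, not Brill's actual system.

def instantiate_rules(current, gold):
    """Propose (from_tag, to_tag, prev_tag) rules at every currently mistagged position."""
    rules = set()
    for i, (pred, corr) in enumerate(zip(current, gold)):
        if pred != corr and i > 0:
            rules.add((pred, corr, current[i - 1]))
    return rules

def score_rule(rule, current, gold):
    """Net number of errors the rule corrects (corrected minus newly introduced)."""
    frm, to, prev = rule
    delta = 0
    for i in range(1, len(current)):
        if current[i] == frm and current[i - 1] == prev:
            delta += 1 if gold[i] == to else -1
    return delta

def apply_rule(rule, current):
    # The rule's context is checked against the pre-application tags.
    frm, to, prev = rule
    out = list(current)
    for i in range(1, len(out)):
        if current[i] == frm and current[i - 1] == prev:
            out[i] = to
    return out

def train_tbl(baseline, gold, threshold=1):
    current, learned = list(baseline), []
    while True:
        candidates = instantiate_rules(current, gold)
        if not candidates:
            break
        best = max(candidates, key=lambda r: score_rule(r, current, gold))
        if score_rule(best, current, gold) < threshold:
            break
        current = apply_rule(best, current)
        learned.append(best)
    return learned

# Toy example: the baseline tags every word with its most frequent tag.
gold     = ["DT", "NN", "VBD", "DT", "NN"]
baseline = ["DT", "NN", "NN",  "DT", "NN"]
print(train_tbl(baseline, gold))   # [('NN', 'VBD', 'NN')]
```

At tagging time the learned rules are then applied deterministically, in the order in which they were learned, to the baseline annotation of the new text.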
modelanother striking difference is that this tagger does not have a separate storage mechanism for lexical information about the focus word the word is merely another feature in the probability modelas a result no generalizations over groups of words with the same set of potential tags are possiblein the tagging phase a beam search is used to find the highest probability tag sequence for the whole sentence324 hidden markov modelsin a hidden markov model the tagging task is viewed as finding the maximum probability sequence of states in a stochastic finitestate machinethe transitions between states emit the words of a sentence with a probability P(wt|st) the states st themselves model tags or sequences of tagsthe transitions are controlled by markovian state transition probabilities P(st|st-1)because a sentence could have been generated by a number of different state sequences the states are considered to be "hidden" although methods for unsupervised training of hmm do exist training is usually done in a supervised way by estimation of the above probabilities from relative frequencies in the training datathe hmm approach to tagging is by far the most studied and applied in van halteren zavrel and daelemans we used a straightforward implementation of hmm which turned out to have the worst accuracy of the four competing methodsin the present work we have replaced this by the tnt system tnt is a trigram tagger which means that it considers the previous two tags as features for deciding on the current tagmoreover it considers the capitalization of the previous word as well in its state representationthe lexical probabilities depend on the identity of the current word for known words and on a suffix tree smoothed with successive abstraction for guessing the tags of unknown wordsas we will see below it shows a surprisingly higher accuracy than our previous hmm implementationwhen we compare it with the other taggers used in this paper we see that a trigram hmm tagger uses a very limited set of features on the other hand it is able to access some information about the rest of the sentence indirectly through its use of the viterbi algorithmthe first set of results from our experiments is the measurement of overall accuracy for the base taggersin addition we can observe the agreement between the systems from which we can estimate how much gain we can possibly expect from combinationthe application of the various combination systems finally shows us how much of the projected gain is actually realizedan additional benefit of training four popular tagging systems under controlled conditions on several corpora is an experimental comparison of their accuracytable 2 lists the accuracies as measured on the test setwe see that tbl achieves the lowest accuracy on all data setsmbt is always better than tbl but is outperformed by both mxp and hmmon two data sets the hidden markov model system is better than the maximum entropy system on the other two mxpost is the better systemin all cases except the difference between mxp and hmm on lob the differences are statistically significant we can also see from these results that wsj although it is about the same size as lob and has a smaller tagset has a higher difficulty level than lobwe suspect that an important reason for this is the inconsistency in the wsj annotation we examine this effect in more detail belowthe eindhoven corpus both with wotan and wotanlite tagsets is yet more difficult but here
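The Viterbi decoding that gives the HMM tagger indirect access to information about the rest of the sentence can be illustrated with a small sketch. To keep it short this uses a first-order (bigram) model with hand-specified transition and emission probabilities, rather than TnT's trigram model with capitalization in the state and suffix-based smoothing for unknown words; all probabilities below are made-up toy values.

```python
# Minimal Viterbi decoder for a first-order (bigram) HMM tagger.
# trans[s_prev][s] = P(s | s_prev), emit[s][w] = P(w | s); "<s>" is the start state.
# A trigram tagger such as TnT conditions on the previous two tags instead.

def viterbi(words, tags, trans, emit):
    # delta[t] = probability of the best tag sequence ending in tag t
    # (plain probabilities rather than log probabilities, for brevity)
    delta = {t: trans["<s>"].get(t, 0.0) * emit[t].get(words[0], 0.0) for t in tags}
    back = [{}]
    for w in words[1:]:
        new_delta, pointers = {}, {}
        for t in tags:
            best_prev = max(tags, key=lambda p: delta[p] * trans[p].get(t, 0.0))
            new_delta[t] = delta[best_prev] * trans[best_prev].get(t, 0.0) * emit[t].get(w, 0.0)
            pointers[t] = best_prev
        delta, back = new_delta, back + [pointers]
    # Trace back the best path.
    last = max(tags, key=lambda t: delta[t])
    path = [last]
    for pointers in reversed(back[1:]):
        path.append(pointers[path[-1]])
    return list(reversed(path))

tags = ["DT", "NN", "VBD"]
trans = {"<s>": {"DT": 0.8, "NN": 0.2},
         "DT":  {"NN": 0.9, "VBD": 0.1},
         "NN":  {"VBD": 0.6, "DT": 0.2, "NN": 0.2},
         "VBD": {"DT": 0.7, "NN": 0.3}}
emit = {"DT": {"the": 1.0},
        "NN": {"dog": 0.5, "cat": 0.5},
        "VBD": {"chased": 1.0}}
print(viterbi("the dog chased the cat".split(), tags, trans, emit))
# ['DT', 'NN', 'VBD', 'DT', 'NN']
```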
the difficulty lies mainly in the complexity of the tagset and the large percentage of unknown words in the test setswe see that the reduction in the complexity of the tagset from wotan to wotanlite leads to an enormous improvement in accuracythis granularity effect is also examined in more detail belowon the basis of the output of the single taggers we can also examine the feasibility of combination as combination is dependent on different systems producing different errorsas expected a large part of the errors are indeed uncorrelated the agreement between the systems is at about the same level as their agreement with the benchmark tagginga more detailed view of intertagger agreement is shown in table 4 which lists the patterns of agreement for the four data setsit is interesting to see that although the general accuracy for wsj is lower than for lob the intertagger agreement for wsj is on average higherit would seem that the less consistent tagging for wsj makes it easier for all systems to fall into the same trapsthis becomes even clearer when we examine the patterns of agreement and see for example that the number of tokens where all taggers agree on a wrong tag is practically doubledthe agreement pattern distribution enables us to determine levels of combination qualitytable 5 lists both the accuracies of several ideal combiners 0 and the error reduction in relation to the best base tagger for the data set in question 22 for example on lob quotall ties correctquot produces 1941 errors which is 313 less than hmm 2824 errorsa minimal level of combination achievement is that a majority or better will lead to the correct tag and that ties are handled appropriately about 50 of the time for the pattern and 25 for the pattern pattern for wotanin more optimistic scenarios a combiner is able to select the correct tag in all tied cases or even in cases where a two or threetagger majority must be overcomealthough the possibility of overcoming a majority is present with the arbiter type combiners the situation is rather improbableas a result we ought to be more than satisfied if any combiners approach the level corresponding to the projected combiner which resolves all ties correctly23 projected accuracies for increasingly successful levels of combination achievementfor each level we list the accuracy and the percentage of errors made by the best individual tagger that can be corrected by combination c50 was not able to cope with the large amount of data involved in all tagsword experiments and the tagscontext experiment with wotanin table 6 the results of our experiments with the various combination methods are shownagain we list both the accuracies of the combiners and the error reduction in relation to the best base tagger for example on lob tagpair produces 2321 errors which is 178 less than hmm 2824 errorsalthough the combiners generally fall short of the quotall ties correctquot level even the most trivial voting system significantly outperforms the best individual tagger on all data setswithin the simple voting systems it appears that use of more detailed voting weights does not necessarily lead to better resultstagprecision is clearly inferior to totprecisionon closer examination this could have been expectedlooking at the actual tag precision values we see that the precision is generally more dependent on the tag than on the tagger so that tagprecision always tends to select the easier tagin other words it uses less specific rather than more specific informationprecisionrecall is meant 
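A minimal sketch of the simple voting combiners discussed above: each base tagger votes for its proposed tag, with the vote weighted by that tagger's overall accuracy on a held-out tuning set, which loosely corresponds to the TotPrecision strategy. The tagger names, outputs, and benchmark tagging below are toy values for illustration only.

```python
# Minimal sketch of accuracy-weighted voting over the outputs of several taggers.
from collections import defaultdict

def tuning_accuracy(predicted, gold):
    return sum(p == g for p, g in zip(predicted, gold)) / len(gold)

def weighted_vote(proposals, weights):
    """proposals: {tagger_name: tag} for one token; weights: {tagger_name: accuracy}."""
    scores = defaultdict(float)
    for name, tag in proposals.items():
        scores[tag] += weights[name]
    return max(scores, key=scores.get)

# Toy tuning data: per-tagger output and the benchmark tagging.
gold = ["DT", "NN", "VBD", "IN", "DT", "NN"]
outputs = {
    "tbl": ["DT", "NN", "VBN", "IN", "DT", "NNS"],
    "mbt": ["DT", "NN", "VBD", "RP", "DT", "NN"],
    "hmm": ["DT", "JJ", "VBD", "IN", "DT", "NN"],
}
weights = {name: tuning_accuracy(tags, gold) for name, tags in outputs.items()}

combined = [weighted_vote({n: outputs[n][i] for n in outputs}, weights)
            for i in range(len(gold))]
print(combined)   # ['DT', 'NN', 'VBD', 'IN', 'DT', 'NN']
print(weights)
```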
to correct this behavior by the involvement of recall valuesas intended precisionrecall generally has a higher accuracy than tagprecision but does not always improve on totprecisionour previously unconfirmed hypothesis that arbitertype combiners would be able to outperform the gangtype ones is now confirmedwith the exception of several of the tagsword versions and the tagscontext version for wsj the more sophisticated modeling systems have a significantly better accuracy than the simple voting systems on all four data setstagpair being somewhere between simple voting and stacking also falls in the middle where accuracy is concernedin general it can at most be said to stay close to the real stacking systems except for the cleanest data set lob where it is clearly being outperformedthis is a fundamental change from our earlier experiments where tagpair was significantly better than mbl and decision treesour explanation at the time that the stacked systems suffered from a lack of training data appears to be correcta closer investigation below shows at which amount of training data the crossover point in quality occurs another unresolved issue from the earlier experiments is the effect of making word or context information available to the stacked classifierswith lob and a single 114k tune set both mbl and decision trees degraded significantly when adding context and mbl degraded when adding the wordwith the increased amount of training material addition of the context generally leads to better resultsfor mbl there is a degradation only for the wsj data and of a much less pronounced naturewith the other data sets there is an improvement significantly so for lobfor decision trees there is also a limited degradation for wsj and wotanlite and a slight improvement for lobthe other two systems appear to be able to use the context more effectivelywpdv shows a relatively constant significant improvement over all data setsmaccent shows more variation with a comparable improvement on lob and wotanlite a very slight degradation on wsj and a spectacular improvement on wotan where it even yields an accuracy higher than the quotall ties correctquot leveladdition of the word is still generally counterproductiveonly wpdv sometimes manages to translate the extra information into an improvement in accuracy and even then a very small oneit would seem that vastly larger amounts of training data are necessary if the word information is to become usefulthe observations about the overall accuracies although the most important are not the only interesting oneswe can also examine the results of the experiments above in more detail evaluating the results of combination for specific words and tags and error rates for the most confusing wordsfor each word we list the total number of instances in the test set the number of tags associated with the word and then for each base tagger and wpdv the rank in the error list the absolute number of errors and the percentage of instances that is mistagged trying to discover why such disappointing results are found for wsjfurthermore we can run additional experiments to determine the effects of the size of the training set the number of base tagger components involved and the granularity of the tagsetthe overall accuracy of the various tagging systems gives a good impression of relative performance but it is also useful to have a more detailed look at the tagging resultsmost importantly for this paper the details give a better feel for the differences between the base taggers and 
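The arbiter-type (stacked) combiners treat the base taggers' proposals, optionally extended with context, as features for a second-stage classifier trained on held-out tuning data. The sketch below uses a scikit-learn decision tree purely as an example second-stage learner (the combiners in the text are memory-based learning, decision trees, WPDV, and maximum entropy); the integer tag encoding and the tiny data set are illustrative assumptions.

```python
# Minimal sketch of a stacked combiner: the tags proposed by the base taggers
# (plus the proposals for the previous token as crude left context) are features
# for a second-stage classifier that predicts the final tag.
from sklearn.tree import DecisionTreeClassifier

def encode(tag, table):
    return table.setdefault(tag, len(table))

def make_features(outputs, names, table):
    n = len(next(iter(outputs.values())))
    rows = []
    for i in range(n):
        row = [encode(outputs[name][i], table) for name in names]                        # proposals
        row += [encode(outputs[name][i - 1] if i else "<s>", table) for name in names]   # left context
        rows.append(row)
    return rows

names = ["tbl", "mbt", "hmm"]
tune_outputs = {"tbl": ["DT", "NN", "VBN", "IN"],
                "mbt": ["DT", "NN", "VBD", "RP"],
                "hmm": ["DT", "JJ", "VBD", "IN"]}
tune_gold = ["DT", "NN", "VBD", "IN"]

table = {}
X = make_features(tune_outputs, names, table)
y = [encode(t, table) for t in tune_gold]

combiner = DecisionTreeClassifier().fit(X, y)

test_outputs = {"tbl": ["DT", "VBN"], "mbt": ["DT", "VBD"], "hmm": ["JJ", "VBD"]}
X_test = make_features(test_outputs, names, table)
inverse = {v: k for k, v in table.items()}
print([inverse[p] for p in combiner.predict(X_test)])
```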
for how well a combiner can exploit these differencesmore generally users of taggers or tagged corpora are rarely interested in the whole corpusthey focus rather on specific words or word classes for which the accuracy of tagging may differ greatly from the overall accuracywe start our detailed examination with the words that are most often mistaggedwe use the lob corpus for this evaluation as it is the cleanest data set and hence the best examplefor each base tagger and for wpdv we list the top seven mistagged words in terms of absolute numbers of errors in table 7although the base taggers have been shown to produce different errors we see that they do tend to make errors on the same words as the five topsevens together contain only nine wordsa high number of errors for a word is due to a combination of tagging difficulty and frequencyexamples of primarily difficult words are much and moreeven though they have relatively low frequencies they are ranked high on the error listswords whose high error rate stems from their difficulty can be recognized by their high error percentage scoresexamples of words whose high error rate stems from their frequency are to and inthe error percentages show that these two words are actually tagged surprisingly well as to is usually quoted as a tough case and for in the taggers have to choose between 14 possible tagsthe first place on the list is taken by as which has both a high frequency and a high difficulty level table 7 shows yet again that there are clear differences between the base taggers providing the opportunity for effective combinationfor all but one word in the combiner manages to improve on the best tagger for that specific wordif we compare to the overall best tagger hmm the improvements are sometimes spectacularthis is of course especially the case where hmm has particular difficulties with a word eg about with a 463 reduction in error rate but in other cases as well eg to with a 322 reduction which is still well above the overall error rate reduction of 243we can also abstract away from the words and simply look at common word class confusions eg a token that should be tagged vbd is actually tagged vbn table 8 shows the tag confusions that are present in the top seven confusion list of at least one of the systems used on lobthe number on the right in each system column is the number of times the error was made and the number on the left is the position in the confusion listthe rows marked with tag values show the individual errorsin addition the quotpairquot rows show the combined value of the two inverse errors preceding itas with the word errors above we see substantial differences between the base taggersunlike the situation with words there are now a number of cases where base taggers perform better than the combinerpartly this is because the base tagger is outvoted to such a degree that its quality cannot be maintained eg nn jjfurthermore it is probably unfair to look at only one half of a pairany attempt to decrease the number of errors of type x y will tend to increase the number of errors of type y xthe balance between the two is best shown in the quotpairquot rows and 26 the tags are cs subordinating conjunction in preposition jj adjective nn singular common noun rp adverbial particle vb base form of verb vbd past tense of verb vbn past participle27 rp in is not actually in any top seven but has been added to complete the last pair of inverse errors here the combiner is again performing excellently in all cases improving on the 
best base tagger for the pairfor an additional point of view we show the precision and recall values of the systems on the same tags in table 9 as well as the percentage of the test set that should be tagged with each specific tagthe differences between the taggers are again present and in all but two cases the combiner produces the best score for both precision and recallfurthermore as precision and recall form yet another balanced pair that is as improvements in recall tend to decrease precision and vice versa the remaining two cases can be considered to be handled quite adequately as wellseeing the rather bad overall performance of the combiners on wsj we feel the need to identify a property of the wsj material that can explain this relative lack of successa prime candidate for this property is the allegedly very low degree of consistency of the wsj materialwe can investigate the effects of the low consistency by way of comparison with the lob data set which is known to be very consistentwe have taken onetenth of the test sets of both wsj and lob and manually examined each token where the wpdv tagging differs from the benchmark taggingthe first indication that consistency is a major factor in performance is found in the basic correctness information given in table 10for wsj there is a much higher percentage where the difference in tagging is due to an erroneous tag in the benchmarkthis does not mean however that the tagger should be given a higher accuracy score as it may well be that the part of the benchmark where tagger and benchmark do agree contains a similar percentage of benchmark errorsit does imply though that the wsj tagging contains many more errors than the lob tagging which is likely to be detrimental to the derivation of automatic taggersthe cases where the tagger is found to be wrong provide interesting information as wellour examination shows that 109 of the 250 erroneous tags occur in situations that are handled rather inconsistently in the corpusin some of these situations we only have to look at the word itselfthe most numerous type of problematic word is the proper noun ending in s it appears to be unclear whether such a word should be tagged nnp or nnpswhen taking the words leading to errors in our 1 test set and examining them in the training data we see a near even split for practically every wordthe most frequent ones are securities and airlines there are only two very unbalanced cases times and savings a similar situation occurs although less frequently for common nouns for example headquarters gets 67 nn and 21 nns tagsin other cases difficult words are handled inconsistently in specific contextsexamples here are about in cases such as about 20 or about 20 ago in cases such as years ago and more in more than finally there are more general word class confusions such as adjectiveparticle or nounadjective in noun premodifying positionshere it is much harder to provide numerical examples as the problematic situation must first be recognizedwe therefore limit ourselves to a few sample phrasesthe first is stockindex which leads to several errors in combinations like stockindex futures or stockindex arbitragein the training set stockindex in premodifying position is tagged jj 64 times and nn 69 timesthe second phrase chief executive officer has three words so that we have four choices of tagging jjjjnn is chosen 90 times jjnnnn 63 times nnjjnn 33 times and nnnnnn 30 timesadmittedly all of these are problematic cases and many other cases are handled quite 
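The per-tag precision and recall figures and the confusion pairs discussed around tables 8 and 9 can be computed with a few lines of code. This is a generic sketch over toy data, not the paper's actual evaluation scripts.

```python
# Minimal sketch: per-tag precision/recall and the most frequent confusion
# pairs, computed from a benchmark tagging and a system's output.
from collections import Counter

def per_tag_scores(gold, predicted):
    tp, fp, fn, confusions = Counter(), Counter(), Counter(), Counter()
    for g, p in zip(gold, predicted):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1
            fn[g] += 1
            confusions[(g, p)] += 1        # should-be tag -> actually-assigned tag
    tags = set(tp) | set(fp) | set(fn)
    scores = {t: (tp[t] / (tp[t] + fp[t]) if tp[t] + fp[t] else 0.0,
                  tp[t] / (tp[t] + fn[t]) if tp[t] + fn[t] else 0.0)
              for t in tags}
    return scores, confusions

gold      = ["VBD", "VBN", "NN", "JJ", "NN", "IN", "RP"]
predicted = ["VBN", "VBN", "NN", "NN", "NN", "RP", "RP"]
scores, confusions = per_tag_scores(gold, predicted)
for tag, (prec, rec) in sorted(scores.items()):
    print(f"{tag}: precision={prec:.2f} recall={rec:.2f}")
print(confusions.most_common(3))
```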
consistentlyhowever the inconsistently handled cases do account for 44 of the errors found for our best tagging systemunder the circumstances we feel quite justified in assuming that inconsistency is the main because of the low accuracy scores28 the most important result that has undergone a change between van halteren zavrel and daelemans and our current experiments is the relative accuracy of tagpair and stacked systems such as mblwhere tagpair used to be significantly better than mbl the roles are now well reversedit appears that our hypothesis at the time that the stacked systems were plagued by a lack of training data is correct since they can now hold their ownin order to see at which point tagpair is overtaken we have trained several systems on increasing amounts of training data from lobeach increment is one of the 10 training corpus parts described abovethe results are shown in figure 5the accuracy of combiner methods on lob as a function of the number of tokens of training materialtagpair is only best when a single part is used after that it is overtaken and quickly left behind as it is increasingly unable to use the additional training data to its advantagethe three systems using only base tagger outputs have comparable accuracy growth curves although the initial growth is much higher for wpdvthe curves for wpdv and maccent appear to be leveling out towards the right end of the graphfor mbl this is much less clearhowever it would seem that the accuracy level at 1m words is a good approximation of the eventual ceilingthe advantage of the use of context information becomes clear at 500k wordshere the tagsonly systems start to level out but wpdv keeps showing a constant growtheven at 1m words there is no indication that the accuracy is approaching a ceilingthe model seems to be getting increasingly accurate in correcting very specific contexts of mistagginganother way in which the amount of input data can be varied is by taking subsets of the set of component taggersthe relation between the accuracy of combinations for lob and that of the individual taggers is shown in table 11the first three columns show the combination the accuracy and the improvement in relation to the best componentthe other four columns show the further improvement gained when adding yet another componentthe most important observation is that every combination outperforms the combination of any strict subset of its componentsthe difference is always significant except in the cases mxphmmmbttbl vs mxphmmmbt and hmmmbttbl vs hmmmbtwe can also recognize the quality of the best component as a major factor in the quality of the combination resultshmm and mxp always add more gain than mbt which always adds more gain than tblanother major factor is the difference in language modelmxp although having a lower accuracy by itself than hmm yet leads to better combination results again witnessed by the gain columnsin some cases mxp is even able to outperform pairs of components in combination both mxpmbt and mxphmm are better than hmmmbttblthe final influence on combination that we measure is that of the granularity of the tagset which can be examined with the highly structured wotan tagsetpart of the examination has already taken place above as we have added the wotanlite tagset a less granular projection of wotanas we have seen the wotanlite taggers undeniably have a much higher accuracy than the wotan oneshowever this is hardly surprising as they have a much easier task to performin order to make a fair comparison we 
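The component-subset comparison summarized in table 11 (every combination outperforming any strict subset of its components) corresponds to an evaluation of the following kind; this sketch uses plain majority voting and toy outputs, whereas the table reports the full combiners on real test data.

```python
# Minimal sketch: evaluate simple majority voting for every subset of two or
# more base taggers.  Ties are broken arbitrarily here, whereas tie handling
# matters for the real combiners.
from collections import Counter
from itertools import combinations

def majority_vote(proposals):
    return Counter(proposals).most_common(1)[0][0]

def accuracy(outputs, names, gold):
    correct = sum(majority_vote([outputs[n][i] for n in names]) == g
                  for i, g in enumerate(gold))
    return correct / len(gold)

gold = ["DT", "NN", "VBD", "IN", "DT", "NN"]
outputs = {"tbl": ["DT", "NN", "VBN", "IN", "DT", "NNS"],
           "mbt": ["DT", "NN", "VBD", "RP", "DT", "NN"],
           "mxp": ["DT", "JJ", "VBD", "IN", "DT", "NN"],
           "hmm": ["DT", "NN", "VBD", "IN", "JJ", "NN"]}

for size in (2, 3, 4):
    for names in combinations(sorted(outputs), size):
        print(names, f"{accuracy(outputs, names, gold):.2f}")
```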
now measure them at their performance of the same task namely the prediction of wotanlite tagswe do this by projecting the output of the wotan taggers and wpdv to wotanlite tagsadditionally we measure all taggers at the main word class level ie after the removal of all attributes and ditto tag markersall results are listed in table 12the three major horizontal blocks each represent a level at which the correctness of the final output is measuredwithin the lower two blocks the three rows represent the type of tags used by the base taggersthe rows for wotan and wotanlite represent the actual taggers as described abovethe row for bestlite does not represent a real tagger but rather a virtual tagger that corresponds to the best tagger from among wotan and wotanlitethis choice for the best granularity is taken once for each system as a whole not per individual tokenthis leads to bestlite being always equal to wotanlite for tbl and mbt and to projected wotan for mxp and hmmthe three major vertical blocks represent combination strategies no combination combination using only the tags and combination using tags and direct contextthe two combination blocks are divided into three columns representing the tag level at which combination is performed for example for the lite column the output of the base taggers is projected to wotanlite tags which are then used as input for the combinerwe hypothesized beforehand that in general the more information a system can use the better its results areunfortunately even for the base taggers reality is not that simplefor both mxp and hmm the wotan tagger indeed yields a better wotanlite tagging than the wotanlite tagger itself thus supporting the hypothesison the other hand the results for mbt do not confirm this as here the wotanlite tagger is more accuratehowever we have already seen that mbt has severe problems in dealing with the complex wotan datafurthermore the lowered accuracy of the mbl combiners when provided with words also indicate that memorybased learning sometimes has problems in coping with a surplus of informationthis means that we have to adjust our hypothesis more information is better but only up to the point where the wealth of information overwhelms the machine learning systemwhere this point is found obviously differs for each systemfor the combiners the situation is rather inconclusivein some cases especially for wpdv combining at a higher granularity produces better resultsin others combining at a lower granularity works betterin all cases the difference in scores between the columns is extremely small and hardly supports any conclusions either waywhat is obviously much more important for the combiners is the quality of the information they can work withhere higher granularity on the part of the ingredients is preferable as combiners based on wotan taggers perform better than those based on wotanlite taggers3 and ingredient performance seems to be even more useful as bestlite yields yet better results in all casescombination of ensembles of classifiers although wellestablished in the machine learning literature has only recently been applied as a method for increasing accuracy in natural language processing tasksthere has of course always been a lot of research on the combination of different methods in hybrid systems or on the combination of different information sourcessome of that work even explicitly uses voting and could therefore also be counted as an ensemble approachfor example rigau atserias and agirre combine different heuristics 
for word sense disambiguation by voting and agirre et al do the same for spelling correction evaluation heuristicsthe difference between single classifiers learning to combine information sources ie their input features and the combination of ensembles of classifiers trained on subsets of those features is not always very clear anywayfor partofspeech tagging a significant increase in accuracy through combining the output of different taggers was first demonstrated in van halteren zavrel and daelemans and brill and wu in both approaches different tagger generators were applied to the same training data and their predictions combined using different combination methods including stackingyet the latter paper reported much lower accuracy improvement figuresas we now apply the methods of van halteren zavrel and daelemans to wsj as well it is easier to make a comparisonan exact comparison is still impossible as we have not used the exact same data preparation and taggers but we can put roughly corresponding figures side by side as for base taggers the first two differences are easily explained unigram has to deal with unknown words while lexprob does not and tnt is a more advanced trigram systemthe slight difference for maximum entropy might be explained by the difference in trainingtest splitwhat is more puzzling is the substantial difference for the transformationbased taggerpossible explanations are that brill and wu used a much better parametrization of this system or that they used a different version of the wsj materialbe that as it may the final results are comparable and it is clear that the lower numbers in relation to lob are caused by the choice of test material rather than by the methods usedin tufi a single tagger generator is trained on different corpora representing different language registersfor the combination a method called credibility profiles worked bestin such a profile for each component tagger information is kept about its overall accuracy its accuracy for each tag etcin another recent study marquez et al investigate several types of ensemble construction in a decision tree learning framework for tagging specific classes of ambiguous words the construction of ensembles was based on bagging selection of different subsets of features in decision tree construction and selection of different splitting criteria in decision tree constructionin all experiments simple voting was used to combine component tagger decisionsall combination approaches resulted in a better accuracy but as these error reductions refer to only part of the tagging task they are hard to compare with our own resultsin abney schapire and singer adaboost variants are used for tagging wsj materialcomponent classifiers here are based on different information sources eg capitalization of current word and the triple quotstring capitalization and tagquot of the word to the left of the current word are the basis for the training of some of their component classifiersresulting accuracy is comparable to but not better than that of the maximum entropy taggertheir approach is also demonstrated for prepositional phrase attachment again with results comparable to but not better than stateoftheart single classifier systemshigh accuracy on the same task is claimed by alegre sopena and lloberas for combining ensembles of neural networksadaboost has also been applied to text filtering and text categorization in chen bangalore and vijayshanker classifier combination is used to overcome the sparse data problem when using more 
contextual information in supertagging an approach in which parsing is reduced to tagging with a complex tagset when using pairwise voting on models trained using different contextual information an error reduction of 5 is achieved over the best component modelparsing is also the task to which henderson and brill apply combination methods with reductions of up to 30 precision error and 6 recall error compared to the best previously published results of single statistical parsersthis recent research shows that the combination approach is potentially useful for many nlp tasks apart from taggingour experiments have shown that at least for the word class tagging task combination of several different systems enables us to raise the performance ceiling that can be observed when using datadriven systemsfor all tested data sets combination provides a significant improvement over the accuracy of the best component taggerthe amount of improvement varies from 113 error reduction for wsj to 243 for lobthe data set that is used appears to be the primary factor in the variation especially the data set consistencyas for the type of combiner all stacked systems using only the set of proposed tags as features reach about the same performancethey are clearly better than simple voting systems at least as long as there is sufficient training datain the absence of sufficient data one has to fall back to less sophisticated combination strategiesaddition of word information does not lead to improved accuracy at least with the current training set sizehowever it might still be possible to get a positive effect by restricting the word information to the most frequent and ambiguous words onlyaddition of context information does lead to improvements for most systemswpdv and maccent make the best use of the extra information with wpdv having an edge for less consistent data and maccent for material with a high error rate although the results reported in this paper are very positive many directions for research remain to be explored in this areain particular we have high expectations for the following two directionsfirst there is reason to believe that better results can be obtained by using the probability distributions generated by the component systems rather than just their best guesses that might fruitfully be searched to yield large ensembles of modular components that are evolved to cooperate for optimal accuracyanother open question is whether and if so when combination is a worthwile technique in actual nlp applicationsafter all the natural language text at hand has to be processed by each of the base systems and then by the combinernow none of these is especially bothersome at runtime but when combining n systems the time needed to process the text can be expected to be at least a factor n1 more than when using a single systemwhether this is worth the improvement that is achieved which is as yet expressed in percents rather than in factors will depend very much on the amount of text that has to be processed and the use that is made of the resultsthere are a few clearcut cases such as a corpus annotation project where the cpu time for tagging is negligible in relation to the time needed for manual correction afterwards or information retrieval on very large text collections where the accuracy improvement does not have enough impact to justify the enormous amount of extra cpu time however most of the time the choice between combining or not combining will have to be based on evidence from carefully designed 
pilot experiments for which this paper can only hope to provide suggestions and encouragementthe authors would like to thank the creators of the tagger generators and classification systems used here for making their systems available and thorsten brants guy de pauw erik tjong kim sang inge de monnink the other members of the cnts ilk and tosca research groups and the anonymous reviewers for comments and discussionthis research was done while the second and third authors were at tilburg universitytheir research was done in the context of the induction of linguistic knowledge research program supported partially by the netherlands organization for scientific research
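As a concrete illustration of the first research direction raised in the conclusions above (combining the probability distributions produced by the component systems rather than only their best guesses), here is a minimal sketch; the distributions and the uniform weights are hypothetical toy values.

```python
# Minimal sketch: combine full per-token tag distributions from component
# taggers by a weighted average instead of voting on single best guesses.
from collections import defaultdict

def combine_distributions(distributions, weights=None):
    """distributions: list of {tag: probability} dicts, one per component tagger."""
    weights = weights or [1.0] * len(distributions)
    combined = defaultdict(float)
    total = sum(weights)
    for dist, w in zip(distributions, weights):
        for tag, p in dist.items():
            combined[tag] += w * p / total
    return max(combined, key=combined.get), dict(combined)

# Under best-guess voting these two hypothetical taggers tie (VBD vs. VBN),
# but the full distributions give a clear combined decision.
d1 = {"VBD": 0.51, "VBN": 0.49}
d2 = {"VBN": 0.90, "VBD": 0.10}
print(combine_distributions([d1, d2]))   # ('VBN', {'VBD': 0.305, 'VBN': 0.695})
```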
J01-2002
improving accuracy in word class tagging through the combination of machine learning systemswe examine how differences in language models learned by different datadriven systems performing the same nlp task can be exploited to yield a higher accuracy than the best individual systemwe do this by means of experiments involving the task of morphosyntactic word class tagging on the basis of three different tagged corporafour wellknown tagger generators are trained on the same corpus dataafter comparison their outputs are combined using several voting strategies and secondstage classifiersall combination taggers outperform their best componentthe reduction in error rate varies with the material in question but can be as high as 243 with the lob corpuswe report on accuracy of around 97 with indomain training data for pos tagging using the penn treebank
probabilistic topdown parsing and language modeling this paper describes the functioning of a broadcoverage probabilistic topdown parser and its application to the problem of language modeling for speech recognition the paper first introduces key notions in language modeling and probabilistic parsing and briefly reviews some previous approaches to using syntactic structure for language modeling a lexicalized probabilistic topdown parser is then presented which performs very well in terms of both the accuracy of returned parses and the efficiency with which they are found relative to the best broadcoverage statistical parsers a new language model that utilizes probabilistic topdown parsing is then outlined and empirical results show that it improves upon previous work in test corpus perplexity interpolation with a trigram model yields an exceptional improvement relative to the improvement observed by other models demonstrating the degree to which the information captured by our parsing model is orthogonal to that captured by a trigram model a small recognition experiment also demonstrates the utility of the modelwith certain exceptions computational linguists have in the past generally formed a separate research community from speech recognition researchers despite some obvious overlap of interestperhaps one reason for this is that until relatively recently few methods have come out of the natural language processing community that were shown to improve upon the very simple language models still standardly in use in speech recognition systemsin the past few years however some improvements have been made over these language models through the use of statistical methods of natural language processing and the development of innovative linguistically wellmotivated techniques for improving language models for speech recognition is generating more interest among computational linguistswhile language models built around shallow local dependencies are still the standard in stateoftheart speech recognition systems there is reason to hope that better language models can and will be developed by computational linguists for this taskthis paper will examine language modeling for speech recognition from a natural language processing point of viewsome of the recent literature investigating approaches that use syntactic structure in an attempt to capture longdistance dependencies for language modeling will be revieweda new language model based on
probabilistic topdown parsing will be outlined and compared with the previous literature and extensive empirical results will be presented which demonstrate its utilitytwo features of our topdown parsing approach will emerge as key to its successfirst the topdown parsing algorithm builds a set of rooted candidate parse trees from left to right over the string which allows it to calculate a generative probability for each prefix string from the probabilistic grammar and hence a conditional probability for each word given the previous words and the probabilistic grammara lefttoright parser whose derivations are not rooted ie with derivations that can consist of disconnected tree fragments such as an lr or shiftreduce parser cannot incrementally calculate the probability of each prefix string being generated by the probabilistic grammar because their derivations include probability mass from unrooted structuresonly at the point when their derivations become rooted can generative string probabilities be calculated from the grammarthese parsers can calculate word probabilities based upon the parser stateas in chelba and jelinek but such a distribution is not generative from the probabilistic grammara parser that is not left to right but which has rooted derivations eg a headfirst parser will be able to calculate generative joint probabilities for entire strings however it will not be able to calculate probabilities for each word conditioned on previously generated words unless each derivation generates the words in the string in exactly the same orderfor example suppose that there are two possible verbs that could be the head of a sentencefor a headfirst parser some derivations will have the first verb as the head of the sentence and the second verb will be generated after the first hence the second verb probability will be conditioned on the first verbother derivations will have the second verb as the head of the sentence and the first verb probability will be conditioned on the second verbin such a scenario there is no way to decompose the joint probability calculated from the set of derivations into the product of conditional probabilities using the chain ruleof course the joint probability can be used as a language model but it cannot be interpolated on a wordbyword basis with say a trigram model which we will demonstrate is a useful thing to dothus our topdown parser allows for the incremental calculation of generative conditional word probabilities a property it shares with other lefttoright parsers with rooted derivations such as earley parsers or leftcorner parsers a second key feature of our approach is that topdown guidance improves the efficiency of the search as more and more conditioning events are extracted from the derivation for use in the probabilistic modelbecause the rooted partial derivation is fully connected all of the conditioning information that might be extracted from the topdown left context has already been specified and a conditional probability model built on this information will not impose any additional burden on the searchin contrast an earley or leftcorner parser will underspecify certain connections between constituents in the left context and if some of the underspecified information is used in the conditional probability model it will have to become specifiedof course this can be done but at the expense of search efficiency the more that this is done the less benefit there is from the underspecificationa topdown parser will in contrast derive an efficiency 
benefit from precisely the information that is underspecified in these other approachesthus our topdown parser makes it very easy to condition the probabilistic grammar on an arbitrary number of values extracted from the rooted fully specified derivationthis has lead us to a formulation of the conditional probability model in terms of values returned from treewalking functions that themselves are contextually sensitivethe topdown guidance that is provided makes this approach quite efficient in practicethe following section will provide some background in probabilistic contextfree grammars and language modeling for speech recognitionthere will also be a brief review of previous work using syntactic information for language modeling before we introduce our model in section 4three parse trees a complete parse tree a complete parse tree with an explicit stop symbol and a partial parse treethis section will introduce probabilistic contextfree grammars as well as such notions as complete and partial parse trees which will be important in defining our language model later in the paperin addition we will explain some simple grammar transformations that will be usedfinally we will explain the notion of ccommand which will be used extensively later as wellpcfgs model the syntactic combinatorics of a language by extending conventional contextfree grammars a cfg g consists of a set of nonterminal symbols v a set of terminal symbols t a start symbol st e v and a set of rule productions p of the form a a where a e these contextfree rules can be interpreted as saying that a nonterminal symbol a expands into one or more either nonterminal or terminal symbols a x0 xk2 a sequence of contextfree rule expansions can be represented in a tree with parents expanding into one or more children below them in the treeeach of the individual local expansions in the tree is a rule in the cfgnodes in the tree with no children are called leavesa tree whose leaves consist entirely of terminal symbols is completeconsider for example the parse tree shown in in figure 1 the start symbol is st which expands into an s the s node expands into an np followed by a vpthese nonterminal nodes each in turn expand and this process of expansion continues until the tree generates the terminal string quotspot chased the ballquot as leavesa cfg g defines a language lg which is a subset of the set of strings of terminal symbols including only those that are leaves of complete trees rooted at st built with rules from the grammar g we will denote strings either as w or as wowi wn where wn is understood to be the last terminal symbol in the stringfor simplicity in displaying equations from this point forward let w be the substring wjlet twg be the set of all complete trees rooted at the start symbol with the string of terminals zug as leaveswe call tzq the set of complete parses of wga pcfg is a cfg with a probability assigned to each rule specifically each righthand side has a probability given the lefthand side of the rulethe probability of a parse tree is the product of the probabilities of each rule in the treeprovided a pcfg is consistent which it always will be in the approach we will be advocating this defines a proper probability distribution over completed treesa pcfg also defines a probability distribution over strings of words in the following way the intuition behind equation 1 is that if a string is generated by the pcfg then it will be produced if and only if one of the trees in the set tzq generated itthus the probability of 
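The displayed equation (1) referred to here appears to have been lost in extraction. A reconstruction consistent with the surrounding definitions, giving the probability of a tree as the product of its rule probabilities and the probability of a string as the sum over its set of complete parses T_{w_0^n}, is:

```latex
% Reconstruction of the tree and string probabilities discussed in the text;
% equation (1) is the string probability.
P(t) \;=\; \prod_{(A \rightarrow \alpha) \,\in\, t} P(A \rightarrow \alpha)
\qquad\qquad
P(w_0^n) \;=\; \sum_{t \,\in\, T_{w_0^n}} P(t) \quad \text{(1)}
```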
the string is the probability of the set twg ie the sum of its members probabilitiesup to this point we have been discussing strings of words without specifying whether they are quotcompletequot strings or notwe will adopt the convention that an explicit beginning of string symbol and an explicit end symbol are part of the vocabulary and a string wg is a complete string if and only if tao is and tv is since the beginning of string symbol is not predicted by language models but rather is axiomatic in the same way that st is for a parser we can safely omit it from the current discussion and simply assume that it is theresee figure 1 for the explicit representationwhile a complete string of words must contain the end symbol as its final word a string prefix does not have this restrictionfor example quotspot chased the ball quot is a complete string and the following is the set of prefix strings of this complete string quotspotquot quotspot chasedquot quotspot chased thequot quotspot chased the ballquot and quotspot chased the ball usquota pcfg also defines a probability distribution over string prefixes and we will present this in terms of partial derivationsa partial derivation d is defined with respect to a prefix string w as follows it is the leftmost derivation of the string with wj on the righthand side of the last expansion in the derivationlet dw be the set of all partial derivations for a prefix string 4then we leftfactor the pcfg so that all productions are binary except those with a single terminal on the righthand side and epsilon productionswe do this because it delays predictions about what nonterminals we expect later in the string until we have seen more of the stringin effect this is an underspecification of some of the predictions that our topdown parser is making about the rest of the stringthe leftfactorization transform that we use is identical to what is called right binarization in roark and johnson see that paper for more discussion of the benefits of two parse trees a complete leftfactored parse tree with epsilon productions and an explicit stop symbol and a partial leftfactored parse tree factorization for topdown and leftcorner parsingfor a grammar g we define a factored grammar gf as follows we can see the effect of this transform on our example parse trees in figure 2this underspecification of the nonterminal predictions allows lexical items to become part of the left context and so be used to condition production probabilities even the production probabilities of constituents that dominate them in the unfactored treeit also brings words further downstream into the lookahead at the point of specificationnote that partial trees are defined in exactly the same way but that the nonterminal yields are made up exclusively of the composite nonterminals introduced by the grammar transformthis transform has a couple of very nice propertiesfirst it is easily reversible ie every parse tree built with gf corresponds to a unique parse tree built with g second if we use the relative frequency estimator for our production probabilities the probability of a tree built with gf is identical to the probability of the corresponding tree built with g finally let us introduce the term ccommandwe will use this notion in our conditional probability model and it is also useful for understanding some of the previous work in this areathe simple definition of ccommand that we will be using in this paper is the following a node a ccommands a node b if and only if a does not dominate b and the 
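The left-factorization (right-binarization) transform described above can be sketched as follows; the composite-label syntax and the helper name are illustrative assumptions rather than the paper's exact notation, but the shape of the output (binary rules closed off by an epsilon production) matches the description.

```python
# Minimal sketch of left-factoring a CFG rule: A -> X1 X2 ... Xk is replaced by
# binary rules whose second child is a composite nonterminal, finished off by
# an epsilon production.  The composite-label syntax here is illustrative only.

def left_factor(lhs, rhs):
    """Return the list of factored (lhs, rhs) rules for one CFG production."""
    if len(rhs) == 1:                      # unary (e.g. POS -> terminal): keep as is
        return [(lhs, rhs)]
    rules, parent, seen = [], lhs, []
    for child in rhs:
        seen.append(child)
        composite = lhs + "-" + "-".join(seen)
        rules.append((parent, [child, composite]))
        parent = composite
    rules.append((parent, []))             # epsilon production closes the constituent
    return rules

for left, right in left_factor("NP", ["DT", "JJ", "NN"]):
    print(left, "->", " ".join(right) or "eps")
# NP -> DT NP-DT
# NP-DT -> JJ NP-DT-JJ
# NP-DT-JJ -> NN NP-DT-JJ-NN
# NP-DT-JJ-NN -> eps
```

Under this illustrative encoding, the constituent() function mentioned later in the text would simply recover the label before the first hyphen of a composite nonterminal.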
lowest branching node that dominates a also dominates bthus in figure 1 the subject np and the vp each ccommand the other because neither dominates the other and the lowest branching node above both dominates the othernotice that the subject np ccommands the object np but not vice versa since the lowest branching node that dominates the object np is the vp which does not dominate the subject npthis section will briefly introduce language modeling for statistical speech recognitionin language modeling we assign probabilities to strings of wordsto assign a probability the chain rule is generally invokedthe chain rule states for a string of k1 words a markov language model of order n truncates the conditioning information in the chain rule to include only the previous n wordsthese models are commonly called ngram modelsthe standard language model used in many speech recognition systems is the trigram model ie a markov model of order 2 which can be characterized by the following equation to smooth the trigram models that are used in this paper we interpolate the probability estimates of higherorder markov models with lowerorder markov models the idea behind interpolation is simple and it has been shown to be very effectivefor an interpolated gram here p is the empirically observed relative frequency and an is a function from vn to 0 1this interpolation is recursively applied to the smallerorder ngrams until the bigram is finally interpolated with the unigram ie ao 1there have been attempts to jump over adjacent words to words farther back in the left context without the use of dependency links or syntactic structure for example saul and pereira and rosenfeld we will focus our very brief review however on those that use grammars or parsing for their language modelsthese can be divided into two rough groups those that use the grammar as a language model and those that use a parser to uncover phrasal heads standing in an important relation to the current wordthe approach that we will subsequently present uses the probabilistic grammar as its language model but only includes probability mass from those parses that are found that is it uses the parser to find a subset of the total set of parses and uses the sum of their probabilities as an estimate of the true probability given the grammaras mentioned in section 21 a pcfg defines a probability distribution over strings of wordsone approach to syntactic language modeling is to use this distribution directly as a language modelthere are efficient algorithms in the literature for calculating exact string prefix probabilities given a pcfgthe algorithms both utilize a leftcorner matrix which can be calculated in closed form through matrix inversionthey are limited therefore to grammars where the nonterminal set is small enough to permit inversionstring prefix probabilities can be straightforwardly used to compute conditional word probabilities by definition stolcke and segal and jurafsky et al used these basic ideas to estimate bigram probabilities from handwritten pcfgs which were then used in language modelsinterpolating the observed bigram probabilities with these calculated bigrams led in both cases to improvements in word error rate over using the observed bigrams alone demonstrating that there is some benefit to using these syntactic language models to generalize beyond observed ngramsanother approach that uses syntactic structure for language modeling has been to use a shiftreduce parser to quotsurfacequot ccommanding phrasal headwords or 
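The displayed language-modeling equations referred to in this passage (the chain rule, the trigram model, the interpolation recursion, and the definition of conditional word probabilities from prefix probabilities) did not survive extraction. Reconstructions consistent with the surrounding prose, writing \widehat{P} for the empirical relative frequency and \lambda_n for the interpolation function from V^n to [0,1] mentioned in the text, are:

```latex
% chain rule over a string of k+1 words
P(w_0^k) \;=\; \prod_{i=0}^{k} P(w_i \mid w_0^{i-1})
% trigram model (Markov model of order 2)
P(w_0^k) \;\approx\; \prod_{i=0}^{k} P(w_i \mid w_{i-2}\, w_{i-1})
% interpolated n-gram, recursively backing off to the (n-1)-gram
P_n(w_i \mid w_{i-n}^{i-1}) \;=\;
    \lambda_n(w_{i-n}^{i-1})\,\widehat{P}(w_i \mid w_{i-n}^{i-1})
    \;+\; \bigl(1-\lambda_n(w_{i-n}^{i-1})\bigr)\,P_{n-1}(w_i \mid w_{i-n+1}^{i-1})
% conditional word probability from string prefix probabilities
P(w_{j+1} \mid w_0^{j}) \;=\; \frac{P(w_0^{j+1})}{P(w_0^{j})}
```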
partofspeech tags from arbitrarily far back in the prefix string for use in a trigramlike modela shiftreduce parser operates from left to right using a stack and a pointer to the next word in the input string9 each stack entry consists minimally of a nonterminal labelthe parser performs two basic operations shifting which involves pushing the pos label of the next word onto the stack and moving the pointer to the following word in the input string and reducing which takes the top k stack entries and replaces them with a single new entry the nonterminal label of which is the lefthand side of a rule in the grammar that has the k top stack entry labels on the righthand sidefor example if there is a rule np dt nn and the top two stack entries are nn and dt then those two entries can be popped off of the stack and an entry with the label np pushed onto the stackgoddeau used a robust deterministic shiftreduce parser to condition word probabilities by extracting a specified number of stack entries from the top of the current state and conditioning on those entries in a way similar to an ngramin empirical trials goddeau used the top two stack entries to condition the word probabilityhe was able to reduce both sentence and word error rates on the atis corpus using this methodthe structured language model used in chelba and jelinek jelinek and chelba and chelba is similar to that of goddeau except that their shiftreduce parser follows a nondeterministic beam search and each stack entry contains in addition to the nonterminal node label the headword of the constituentthe slm is like a trigram except that the conditioning words are taken from the tops of the stacks of candidate parses in the beam rather than from the linear order of the stringtheir parser functions in three stagesthe first stage assigns a probability to the word given the left context the second stage predicts the pos given the word and the left contextthe last stage performs all possible parser operations when there is no more parser work to be done the following word is predictedand so on until the end of the stringeach different pos assignment or parser operation is a step in a derivationeach distinct derivation path within the beam has a probability and a stack state associated with itevery stack entry has a nonterminal node label and a designated headword of the constituentwhen all of the parser operations have finished at a particular point in the string the next word is predicted as follows for each derivation in the beam the headwords of the two topmost stack entries form a trigram with the conditioned wordthis interpolated trigram probability is then multiplied by the normalized probability of the derivation to provide that derivation contribution to the probability of the wordmore precisely for a beam of derivations d where hod and hid are the lexical heads of the top two entries on the stack of d figure 3 gives a partial tree representation of a potential derivation state for the string quotthe dog chased the cat with spotsquot at the point when the word quotwithquot is to be predictedthe shiftreduce parser will have perhaps built the structure shown and the stack state will have an np entry with the head quotcatquot at the top of the stack and a vbd entry with the head quotchasedquot second on the stackin the chelba and jelinek model the probability of quotwithquot is conditioned on these two headwords for this derivationsince the specific results of the slm will be compared in detail with our model when the empirical 
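The word-prediction equation of the structured language model described here is missing from the extracted text, and the headword symbols appear garbled as "hod" and "hid". A reconstruction consistent with the description, with D_j the beam of derivations after word w_j, h_0^d and h_1^d the lexical heads of the two topmost stack entries of derivation d, and \rho(d) the probability of d normalized over the beam, is:

```latex
% Reconstruction of the SLM word-prediction formula described in the text.
P(w_{j+1} \mid w_0^{j}) \;=\;
    \sum_{d \in D_j} P\bigl(w_{j+1} \mid h_0^{d},\, h_1^{d}\bigr)\,\rho(d),
\qquad
\rho(d) \;=\; \frac{P(d,\, w_0^{j})}{\sum_{d' \in D_j} P(d',\, w_0^{j})}
```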
results are presented at this point we will simply state that they have achieved a reduction in both perplexity and word error rate over a standard trigram using this modelthe rest of this paper will present our parsing model its application to language modeling for speech recognition and empirical resultsstatistically based heuristic bestfirst or beamsearch strategies have yielded an enormous improvement in the quality and speed of parsers even without any guarantee that the parse returned is in fact that with the maximum likelihood for the probability modelthe parsers with the highest published broadcoverage parsing accuracy which include charniak collins and ratnaparkhi all utilize simple and straightforward statistically based search heuristics pruning the searchspace quite dramaticallysuch methods are nearly always used in conjunction with some form of dynamic programming that is search efficiency for these parsers is improved by both statistical search heuristics and dphere we will present a parser that uses simple search heuristics of this sort without dpour approach is found to yield very accurate parses efficiently and in addition to lend itself straightforwardly to estimating word probabilities online that is in a single pass from left to rightthis online characteristic allows our language model to be interpolated on a wordbyword basis with other models such as the trigram yielding further improvementsnext we will outline our conditional probability model over rules in the pcfg followed by a presentation of the topdown parsing algorithmwe will then present empirical results in two domains one to compare with previous work in the parsing literature and the other to compare with previous work using parsing for language modeling for speech recognition in particular with the chelba and jelinek results mentioned abovea simple pcfg conditions rule probabilities on the lefthand side of the ruleit has been shown repeatedlyeg briscoe and carroll charniak collins inui et al johnson that conditioning the probabilities of structures on the context within which they appear for example on the lexical head of a constituent on the label of its parent nonterminal or ideally on both and many other things besides leads to a much better parsing model and results in higher parsing accuraciesone way of thinking about conditioning the probabilities of productions on contextual information is as annotating the extra conditioning information onto the labels in the contextfree rulesexamples of this are bilexical grammarssuch as eisner and satta charniak collins where the lexical heads of each constituent are annotated on both the right and lefthand sides of the contextfree rules under the constraint that every constituent inherits the lexical head from exactly one of its children and the lexical head of a pos is its terminal itemthus the rule s np vp becomes for instance s barks np dog vpbarksone way to estimate the probabilities of these rules is to annotate the heads onto the constituent labels in the training corpus and simply count the number of times particular productions occur this procedure yields conditional probability distributions of constituents on the righthand side with their lexical heads given the lefthand side constituent and its lexical headthe same procedure works if we annotate parent information onto constituentsthis is how johnson conditioned the probabilities of productions the lefthand side is no longer for example s but rather si sbar ie an s with sbar as parentnotice however 
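A minimal sketch of the relative-frequency estimation just described, using parent annotation in the spirit of Johnson (1998): each left-hand-side label is annotated with its parent label, annotated productions are counted in a toy treebank, and the rule probability is the annotated rule count divided by the annotated left-hand-side count. The Tree class and the example tree are assumptions for illustration.

from collections import defaultdict

class Tree:
    def __init__(self, label, children=None, word=None):
        self.label = label
        self.children = children or []
        self.word = word                    # set for preterminal (POS) nodes

def count_annotated_rules(tree, parent_label, lhs_counts, rule_counts):
    # Annotate the left-hand side with its parent label, e.g. NP^S.
    lhs = f"{tree.label}^{parent_label}"
    rhs = (tree.word,) if tree.word is not None else tuple(c.label for c in tree.children)
    lhs_counts[lhs] += 1
    rule_counts[(lhs, rhs)] += 1
    for c in tree.children:
        count_annotated_rules(c, tree.label, lhs_counts, rule_counts)

def rule_prob(lhs, rhs, lhs_counts, rule_counts):
    # Relative frequency estimate: count(lhs -> rhs) / count(lhs).
    return rule_counts.get((lhs, rhs), 0) / lhs_counts[lhs]

# Toy treebank tree: (S (NP (DT the) (NN dog)) (VP (VBZ barks)))
t = Tree("S", [Tree("NP", [Tree("DT", word="the"), Tree("NN", word="dog")]),
               Tree("VP", [Tree("VBZ", word="barks")])])
lhs_counts, rule_counts = defaultdict(int), defaultdict(int)
count_annotated_rules(t, "TOP", lhs_counts, rule_counts)
print(rule_prob("NP^S", ("DT", "NN"), lhs_counts, rule_counts))   # 1.0 on this toy treebank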
that in this case the annotations on the righthand side are predictable from the annotation on the lefthand side so that the relative frequency estimator yields conditional probability distributions of the original rules given the parent of the lefthand sideall of the conditioning information that we will be considering will be of this latter sort the only novel predictions being made by rule expansions are the node labels of the constituents on the righthand sideeverything else is already specified by the left contextwe use the relative frequency estimator and smooth our production probabilities by interpolating the relative frequency estimates with those obtained by quotannotatingquot less contextual informationthis perspective on conditioning production probabilities makes it easy to see that in essence by conditioning these probabilities we are growing the state spacethat is the number of distinct nonterminals grows to include the composite labels so does the number of distinct productions in the grammarin a topdown parser each rule expansion is made for a particular candidate parse which carries with it the entire rooted derivation to that point in a sense the lefthand side of the rule is annotated with the entire left context and the rule probabilities can be conditioned on any aspect of this derivationwe do not use the entire left context to condition the rule probabilities but rather quotpickandchoosequot which events in the left context we would like to condition onone can think of the conditioning events as functions which take the partial tree structure as an argument and return a value upon which the rule probability can be conditionedeach of these functions is an algorithm for walking the provided tree and returning a valuefor example suppose that we want to condition the probability of the rule a awe might write a function that takes the partial tree finds the parent of the lefthand side of the rule and returns its node labelif the lefthand side has no parent the function returns the null value we might write another function that returns the nonterminal label of the closest sibling to the left of a and null if no such node existswe can then condition the probability of the production on the values that were returned by the set of functionsrecall that we are working with a factored grammar so some of the nodes in the factored tree have nonterminal labels that were created by the factorization and may not be precisely what we want for conditioning purposesin order to avoid any confusions in identifying the nonterminal label of a particular rule production in either its factored or rionfactored version we introduce the function constituent for every nonterminal in the factored grammar gf which is simply the label of the constituent whose factorization results in afor example in figure 2 constituent is simply npnote that a function can return different values depending upon the location in the tree of the nonterminal that is being expandedfor example suppose that we have a function that returns the label of the closest sibling to the left of constituent or null if no such node existsthen a subsequent function could be defined as follows return the parent of the parent of constituent only if constituent has no sibling to the leftin other words if the previous function returns null otherwise return the second closest sibling to the left of constituent or as always null if no such node existsif the function returns for example np this could either mean that the grandparent is np or 
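The "pick and choose" conditioning functions described above can be sketched as small tree-walking procedures over the partial derivation, each returning a label from the left context or None when the relevant node does not exist; one function's value may depend on another's, as in the grandparent-only-if-no-left-sibling example in the text. The Node representation and the particular functions shown are simplified assumptions, not the exact feature set of the paper's figure.

class Node:
    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent
        self.children = []                 # only the left context built so far
        if parent is not None:
            parent.children.append(self)

def parent_label(node):
    # "Parent of the left-hand side" conditioning function.
    return node.parent.label if node.parent else None

def left_sibling_label(node):
    # Closest sibling to the left of the node being expanded, or None.
    if node.parent is None:
        return None
    sibs = node.parent.children
    i = sibs.index(node)
    return sibs[i - 1].label if i > 0 else None

def grandparent_if_leftmost(node):
    # A function whose value depends on an earlier one: return the grandparent
    # label only when there is no left sibling (simplified from the text).
    if left_sibling_label(node) is not None:
        return None
    p = node.parent
    return p.parent.label if p and p.parent else None

# Partial derivation S -> NP VP, currently expanding the VP.
s = Node("S"); np = Node("NP", parent=s); vp = Node("VP", parent=s)
print(parent_label(vp), left_sibling_label(vp), grandparent_if_leftmost(vp))  # S NP None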
the second closest sibling is conditional probability model represented as a decision tree identifying the location in the partial parse tree of the conditioning informationnp yet there is no ambiguity in the meaning of the function since the result of the previous function disambiguates between the two possibilitiesthe functions that were used for the present study to condition the probability of the rule a a are presented in figure 4 in a tree structurethis is a sort of decision tree for a treewalking algorithm to decide what value to return for a given partial tree and a given depthfor example if the algorithm is asked for the value at level 0 it will return a the lefthand side of the rule being expandedquot suppose the algorithm is asked for the value at level 4after level 2 there is a branch in the decision treeif the lefthand side of the rule is a pos and there is no sibling to the left of constituent in the derivation then the algorithm takes the right branch of the decision tree to decide what value to return otherwise the left branchsuppose it takes the left branchthen after level 3 there is another branch in the decision treeif the lefthand side of the production is a pos then the algorithm takes the right branch of the decision tree and returns the pos of the closest ccommanding lexical head to a which it finds by walking the parse tree if the lefthand side of the rule is not a pos then the algorithm returns the closest sibling to the left of the parent of constituent the functions that we have chosen for this paper follow from the intuition that what helps parsing is different depending on the constituent that is being expandedpos nodes have lexical items on the righthand side and hence can bring into the model some of the headhead dependencies that have been shown to be so effectiveif the pos is leftmost within its constituent then very often the lexical item is sensitive to the governing category to which it is attachingfor example if the pos is a preposition then its probability of expanding to a particular word is very different if it is attaching to a noun phrase than if it is attaching to a verb phrase and perhaps quite different depending on the head of the constituent to which it is attachingsubsequent poss within a constituent are likely to be openclass words and less dependent on these sorts of attachment preferencesconditioning on parents and siblings of the lefthand side has proven to be very usefulto understand why this is the case one need merely to think of vp expansionsif the parent of a vp is another vp then the distribution over productions is different than if the parent is an s conditioning on head information both pos of the head and the lexical item itself has proven useful as well although given our parser lefttoright orientation in many cases the head has not been encountered within the particular constituentin such a case the head of the last child within the constituent is used as a proxy for the constituent headall of our conditioning functions with one exception return either parent or sibling node labels at some specific distance from the lefthand side or head information from ccommanding constituentsthe exception is the function at level 5 along the left branch of the tree in figure 4suppose that the node being expanded is being conjoined with another node which we can tell by the presence or absence of a cc nodein that case we want to condition the expansion on how the conjoining constituent expandedin other words this attempts to capture a 
certain amount of parallelism between the expansions of conjoined categoriesin presenting the parsing results we will systematically vary the amount of conditioning information so as to get an idea of the behavior of the parserwe will refer to the amount of conditioning by specifying the deepest level from which a value is returned for each branching path in the decision tree from left to right in figure 4 the first number is for left contexts where the left branch of the decision tree is always followed the second number is for a left branch followed by a right branch and the third number is for the contexts where the right branch is always followed for example would represent a conditional probability model that returns null for all functions below level 4 in all contexts returns null for all functions below level 3 if the lefthand side is a pos and returns null for all functions below level 2 for nonleftmost pos expansionstable 1 gives a breakdown of the different levels of conditioning information used in the empirical trials with a mnemonic label that will be used when presenting resultsthese different levels were chosen as somewhat natural points at which to observe how much of an effect increasing the conditioning information haswe first include structural information from the context namely node labels from constituents in the left contextthen we add lexical information first for nonpos expansions then for leftmost pos expansions then for all expansionsall of the conditional probabilities are linearly interpolatedfor example the probability of a rule conditioned on six events is the linear interpolation of two probabilities the empirically observed relative frequency of the rule when the six events cooccur and the probability of the rule conditioned on the first five events the interpolation coefficients are a function of the frequency of the set of conditioning events and are estimated by iteratively adjusting the coefficients so as to maximize the likelihood of a heldout corpusthis was an outline of the conditional probability model that we used for the pcfgthe model allows us to assign probabilities to derivations which can be used by the parsing algorithm to decide heuristically which candidates are promising and should be expanded and which are less promising and should be prunedwe now outline the topdown parsing algorithmthis parser is essentially a stochastic version of the topdown parser described in aho sethi and ullman it uses a pcfg with a conditional probability model of the sort defined in the previous sectionwe will first define candidate analysis and then a derives relation between candidate analyseswe will then present the algorithm in terms of this relationthe parser takes an input string 4 a pcfg g and a priority queue of candidate analysesa candidate analysis c which is a measure of the likelihood of the stack s rewriting with w at its left cornerwe can define a derives relation denoted between two candidate analyses as follows if and only if12 the parse begins with a single candidate analysis on the priority queue st 1 1 4next the top ranked candidate analysis c is popped from the priority queueif s and w vs then the analysis is completeotherwise all c such that c c are pushed onto the priority queuewe implement this as a beam searchfor each word position i we have a separate priority queue h of analyses with lookahead wwhen there are quotenoughquot analyses by some criteria on priority queue h1 all candidate analyses remaining on h are discardedsince w all 
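A schematic sketch, under simplifying assumptions, of the word-synchronous top-down beam search described here: one priority queue of candidate analyses per word position, the top-ranked candidate repeatedly popped, its leftmost (top-of-stack) nonterminal expanded by grammar rules, and analyses whose top of stack matches the look-ahead word advanced to the next queue. The grammar interface, the candidate representation, and the stopping criterion are assumptions; the look-ahead probability and the beam pruning discussed next are omitted.

import heapq
import itertools
import math

def parse_beam(words, expand, is_terminal, root="S", max_pops=10000):
    """Word-synchronous top-down beam search (schematic).
    expand(nt) -> list of (rhs_tuple, rule_prob); is_terminal(sym) -> bool.
    A candidate analysis here is (neg_log_prob, tiebreak, stack, derivation)."""
    tie = itertools.count()
    H = [[] for _ in range(len(words) + 1)]        # one priority queue per position
    heapq.heappush(H[0], (0.0, next(tie), (root,), ()))
    for i, w in enumerate(words):
        pops = 0
        while H[i] and pops < max_pops:
            cost, _, stack, deriv = heapq.heappop(H[i])
            pops += 1
            if not stack:
                continue                           # nothing left to rewrite: dead end
            top, rest = stack[0], stack[1:]
            if is_terminal(top):
                if top == w:                       # look-ahead matched: advance
                    heapq.heappush(H[i + 1], (cost, next(tie), rest, deriv + (w,)))
                continue
            for rhs, p in expand(top):             # expand the leftmost nonterminal
                heapq.heappush(H[i], (cost - math.log(p), next(tie),
                                      tuple(rhs) + rest, deriv + ((top, tuple(rhs)),)))
    complete = [c for c in H[len(words)] if not c[2]]   # empty stack = complete parse
    return min(complete) if complete else None

# Toy grammar and sentence.
rules = {"S": [(("NP", "VP"), 1.0)], "NP": [(("the", "dog"), 1.0)], "VP": [(("barks",), 1.0)]}
print(parse_beam(["the", "dog", "barks"],
                 expand=lambda nt: rules.get(nt, []),
                 is_terminal=lambda sym: sym not in rules))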
parses that are pushed onto h_{n+1} are complete the parse on h_{n+1} with the highest probability is returned for evaluation in the case that no complete parse is found a partial parse is returned and evaluated the lap is the probability of a particular terminal being the next leftcorner of a particular analysis the terminal may be the left corner of the topmost nonterminal on the stack of the analysis or it might be the left corner of the nth nonterminal after the top n-1 nonterminals have rewritten to epsilon of course we cannot expect to have adequate statistics for each nonterminal word pair that we encounter so we smooth to the pos since we do not know the pos for the word we must sum the lap over all pos for a pcfg g a stack s = a_0 ... a_n (a_0 on top) and a lookahead terminal w the lap is the sum over stack positions i of the probability that the nonterminals above a_i all rewrite to epsilon times the probability that w is a left corner of a_i the same empirical probability p~ is collected for every preterminal x as well the lap approximation for a given stack state and lookahead terminal interpolates the empirically observed probability of w being a left corner of a_i with an estimate obtained by summing over preterminals x the empirical probability of x being a left corner of a_i times the probability of x rewriting to w the lambdas are a function of the frequency of the nonterminal a_i in the standard way the beam threshold at word w_{i+1} is a function of the probability of the topranked candidate analysis on priority queue h_{i+1} and the number of candidates on h_{i+1} the basic idea is that we want the beam to be very wide if there are few analyses that have been advanced but relatively narrow if many analyses have been advanced if p~ is the probability of the highestranked analysis on h_{i+1} then another analysis is discarded if its probability falls below p~f where f = gamma |h_{i+1}|^3 and gamma is an initial parameter which we call the base beam factor for the current study gamma was 10^-11 unless otherwise noted thus if 100 analyses have already been pushed onto h_{i+1} then a candidate analysis must have a probability above 10^-5 p~ to avoid being pruned after 1000 candidates the beam has narrowed to 10^-2 p~ there is also a maximum number of allowed analyses on h_i in case the parse fails to advance an analysis to h_{i+1} this was typically 10000 as mentioned in section 21 we leftfactor the grammar so that all productions are binary except those with a single terminal on the righthand side and epsilon productions the only epsilon productions are those introduced by leftfactorization our factored grammar was produced by factoring the trees in the training corpus before grammar induction which proceeded in the standard way by counting rule frequencies the empirical results will be presented in three stages first trials to examine the accuracy and efficiency of the parser then trials to examine its effect on test corpus perplexity and recognition performance and finally trials to examine the effect of beam variation on these performance measures before presenting the results we will introduce the methods of evaluation perplexity is a standard measure within the speech recognition community for comparing language models in principle if two models are tested on the same test corpus the model that assigns the lower perplexity to the test corpus is the model closest to the true distribution of the language and thus better as a prior model for speech recognition perplexity is the exponential of the cross entropy which we will define next given a random variable x with distribution p and a probability model q the cross entropy h(p,q) is defined as h(p,q) = - sum_x p(x) log q(x) let p be the true distribution of the language then under certain assumptions given a large enough sample the sample mean of the negative log probability of a model will converge to its cross entropy with the true model that is h(p,q) = - lim_{n->infinity} (1/n) log q(w_1 ... w_n) where w_1 ... w_n is a string of the language l in practice one takes a large sample of the language and calculates the negative log probability
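A small sketch of the pruning rule as reconstructed above: with base beam factor gamma, a candidate is kept only while its probability stays above p~ * gamma * |H|^3, where p~ is the probability of the best analysis on the queue and |H| is the number of analyses already advanced. The incremental way the floor is applied during pruning is an interpretation; the constants reproduce the worked numbers in the text (gamma = 10^-11 gives floors of 10^-5 p~ after 100 analyses and 10^-2 p~ after 1000).

def beam_threshold(best_prob, num_advanced, base_beam_factor=1e-11):
    """Probability floor for keeping a candidate: p~ * gamma * |H|^3."""
    return best_prob * base_beam_factor * (num_advanced ** 3)

def prune(candidates, base_beam_factor=1e-11, max_size=10000):
    """candidates: list of (prob, analysis) pairs; keep at most max_size, and drop
    any candidate that falls below the (steadily rising) probability floor."""
    candidates = sorted(candidates, key=lambda c: c[0], reverse=True)[:max_size]
    if not candidates:
        return candidates
    best = candidates[0][0]
    kept = []
    for prob, analysis in candidates:
        if prob >= beam_threshold(best, len(kept), base_beam_factor):
            kept.append((prob, analysis))
    return kept

# gamma = 1e-11 reproduces the worked example: floors of 1e-5 and 1e-2 times the
# best probability after 100 and 1000 analyses respectively.
print(beam_threshold(1.0, 100), beam_threshold(1.0, 1000))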
of the sample normalized by its size15 the lower the cross entropy the better the modelusually this is reported in terms of perplexity which we will do as wellsome of the trials discussed below will report results in terms of word andor sentence error rate which are obtained when the language model is embedded in a speech recognition systemword error rate is the number of deletion insertion or substitution errors per 100 wordssentence error rate is the number of sentences with one or more errors per 100 sentencesstatistical parsers are typically evaluated for accuracy at the constituent level rather than simply whether or not the parse that the parser found is completely correct or nota constituent for evaluation purposes consists of a label and a span for example in figure 1 there is a vp that spans the words quotchased the ballquotevaluation is carried out on a handparsed test corpus and the manual parses are treated as correctwe will call the manual parse gold and the parse that the parser returns testprecision is the number of common constituents in gold and test divided by the number of constituents in testrecall is the number of common constituents in gold and test divided by the number of constituents in goldfollowing standard practice we will be reporting scores only for nonpartofspeech constituents which are called labeled recall and labeled precision sometimes in figures we will plot their average and also what can be termed the parse error which is one minus their averagelr and lp are part of the standard set of parseval measures of parser quality from this set of measures we will also include the crossing bracket scores average crossing brackets percentage of sentences with no crossing brackets and the percentage of sentences with two crossing brackets or fewer in addition we show the average number of rule expansions considered per word that is the number of rule expansions for which a probability was calculated and the average number of analyses advanced to the next priority queue per wordthis is an incremental parser with a pruning strategy and no backtrackingin such a model it is possible to commit to a set of partial analyses at a particular point that cannot be completed given the rest of the input string in such a case the parser fails to return a complete parsein the event that no complete parse is found the highest initially ranked parse on the last nonempty priority queue is returnedall unattached words are then attached at the highest level in the treein such a way we predict no new constituents and all incomplete constituents are closedthis structure is evaluated for precision and recall which is entirely appropriate for these incomplete as well as complete parsesif we fail to identify nodes later in the parse recall will suffer and if our early predictions were bad both precision and recall will sufferof course the percentage of these failures are reported as wellthe first set of results looks at the performance of the parser on the standard corpora for statistical parsing trials sections 221 of the penn treebank served as the training data section 24 as the heldout data for parameter estimation and section 23 as the test datasection 22 served as the development corpus on which the parser was tested until stable versions were ready to run on the test data to avoid developing the parser to fit the specific test datatable 2 shows trials with increasing amounts of conditioning information from the left contextthere are a couple of things to notice from these 
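A minimal sketch of the constituent-level evaluation just defined: constituents are (label, start, end) triples taken from the gold and test parses, labeled precision is the share of test constituents found in gold, and labeled recall the share of gold constituents found in test. Multisets handle repeated constituents; filtering out part-of-speech constituents is assumed to have happened upstream.

from collections import Counter

def labeled_precision_recall(gold, test):
    """gold, test: iterables of (label, start, end) triples for the
    non-part-of-speech constituents of the gold and returned parses."""
    g, t = Counter(gold), Counter(test)
    matched = sum((g & t).values())                 # multiset intersection
    precision = matched / sum(t.values()) if t else 0.0
    recall = matched / sum(g.values()) if g else 0.0
    return precision, recall

gold = [("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5), ("NP", 3, 5)]
test = [("S", 0, 5), ("NP", 0, 2), ("VP", 2, 5), ("PP", 3, 5)]
lp, lr = labeled_precision_recall(gold, test)
print(lp, lr, 1 - (lp + lr) / 2)    # precision, recall, and the "parse error"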
resultsfirst and least surprising is that the accuracy of the parses improved as we conditioned on more and more informationlike the nonlexicalized parser in roark and johnson we found that the search efficiency in terms of number of rule expansions considered or number of analyses advanced also improved as we increased the amount of conditioningunlike the roark and johnson parser however our coverage did not substantially drop as the amount of conditioning information increased and in some cases coverage improved slightlythey did not smooth their conditional probability estimates and blamed sparse data for their decrease in coverage as they increased the conditioning informationthese results appear to support this since our smoothed model showed no such tendencyfigure 5 shows the reduction in parser error 1 lrlp and the reduction in rule expansions considered as the conditioning information increasedthe bulk of the improvement comes from simply conditioning on the labels of the parent and the closest sibling to the node being expandedinterestingly conditioning all pos expansions on two ccommanding heads made no difference in accuracy compared to conditioning only leftmost pos expansions on a single ccommanding head but it did improve the efficiencythese results achieved using very straightforward conditioning events and considering only the left context are within one to four points of the best published observed running time on section 23 of the penn treebank with the full conditional probability model and beam of 1011 using one 300 mhz ultrasparc processor and 256mb of ram of a sun enterprise 450 accuracies cited aboveof the 2416 sentences in the section 728 had the totally correct parse 301 percent tree accuracyalso the parser returns a set of candidate parses from which we have been choosing the top ranked if we use an oracle to choose the parse with the highest accuracy from among the candidates we find an average labeled precisionrecall of 941 for sentences of length 100the parser thus could be used as a front end to some other model with the hopes of selecting a more accurate parse from among the final candidateswhile we have shown that the conditioning information improves the efficiency in terms of rule expansions considered and analyses advanced what does the efficiency of such a parser look like in practicefigure 6 shows the observed time at our standard base beam of 1011 with the full conditioning regimen alongside an approximation of the reported observed time in ratnaparkhi our observed times look polynomial which is to be expected given our pruning strategy the denser the competitors within a narrow probability range of the best analysis the more time will be spent working on these competitors and the farther along in the sentence the more chance for ambiguities that can lead to such a situationwhile our observed times are not linear and are clearly slower than his times they are quite respectably fastthe differences between a kbest and a beamsearch parser make a running time difference unsurprisingwhat is perhaps surprising is that the difference is not greaterfurthermore this is quite a large beam so that very large improvements in efficiency can be had at the expense of the number of analyses that are retainedthe next set of results will highlight what recommends this approach most the ease with which one can estimate string probabilities in a single pass from left to right across the stringby definition a pcfg estimate of a string probability is the sum of the 
probabilities of all trees that produce the string as terminal leaves in the beam search approach outlined above we can estimate the string probability in the same manner by summing the probabilities of the parses that the algorithm findssince this is not an exhaustive search the parses that are returned will be a subset of the total set of trees that would be used in the exact pcfg estimate hence the estimate thus arrived at will be bounded above by the probability that would be generated from an exhaustive searchthe hope is that a large amount of the probability mass will be accounted for by the parses in the beamthe method cannot overestimate the probability of the stringrecall the discussion of the grammar models above and our definition of the set of partial derivations dw with respect to a prefix string wil by definition note that the numerator at word wj is the denominator at word w11 so that the product of all of the word probabilities is the numerator at the final word namely the string prefix probabilitywe can make a consistent estimate of the string probability by similarly summing over all of the trees within our beamlet ht be the priority queue h before any processing has begun with word w in the lookaheadthis is a subset of the possible leftmost partial derivations with respect to the prefix string w since rv is produced by expanding only analyses on priority queue h the set of complete trees consistent with the partial derivations on priority queue ht is a subset of the set of complete trees consistent with the partial derivations on priority queue ht that is the total probability mass represented by the priority queues is monotonically decreasingthus conditional word probabilities defined in a way consistent with equation 14 will always be between zero and oneour conditional word probabilities are calculated as follows as mentioned above the model cannot overestimate the probability of a string because the string probability is simply the sum over the beam which is a subset of the possible derivationsby utilizing a figure of merit to identify promising analyses we are simply focusing our attention on those parses that are likely to have a high probability and thus we are increasing the amount of probability mass that we do capture of the total possibleit is not part of the probability model itselfsince each word is losing some probability mass the probability model is not quotproper quotthe sum of the probabilities over the vocabulary is less than onein order to have a proper probability distribution we would need to renormalize by dividing by some factornote however that this renormalization factor is necessarily less than one and thus would uniformly increase each word probability under the model that is any perplexity results reported below will be higher than the quottruequot perplexity that would be assigned with a properly normalized distributionin other words renormalizing would make our perplexity measure lower stillthe hope however is that the improved parsing model provided by our conditional probability model will cause the distribution over structures to be more peaked thus enabling us to capture more of the total probability mass and making this a fairly snug upper bound on the perplexityone final note on assigning probabilities to strings because this parser does garden path on a small percentage of sentences this must be interpolated with another estimate to ensure that every word receives a probability estimatein our trials we used the unigram with a very 
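The word-probability estimate described in this passage can be sketched as a ratio of beam masses: the summed probability of the analyses on the queue after a word is incorporated, divided by the summed probability on the queue before it. The per-queue probability lists below are placeholders; the example shows how the conditional probabilities telescope to the beam's (lower-bound) estimate of the string probability.

import math

def conditional_word_probs(beam_masses):
    """beam_masses[i]: probabilities of the analyses kept on priority queue H_i,
    i.e. just before word w_{i+1} is incorporated (H_0 is the start queue).
    Each conditional word probability is a ratio of successive beam masses."""
    probs = []
    for i in range(len(beam_masses) - 1):
        den = sum(beam_masses[i])
        num = sum(beam_masses[i + 1])
        probs.append(num / den if den > 0 else 0.0)
    return probs

# The product of the conditional probabilities telescopes to the beam's
# estimate of the string probability (the mass left on the final queue).
queues = [[1.0], [0.4, 0.1], [0.2, 0.05], [0.08]]
word_probs = conditional_word_probs(queues)
print(word_probs, math.exp(sum(math.log(p) for p in word_probs)), sum(queues[-1]))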
small mixing coefficient following words since the denominator is zerothus chelba and jelinek also used a parser to help assign word probabilities via the structured language model outlined in section 32they trained and tested the slm on a modified more quotspeechlikequot version of the penn treebanktheir modifications included removing orthographic cues to structure replacing all numbers with the single token n and closing the vocabulary at 10000 replacing all other words with the unk tokenthey used sections 0020 as the development set sections 2122 as the check set and tested on sections 2324 we obtained the training and testing corpora from them and also created intermediate corpora upon which only the first two modifications were carried out differences in performance will give an indication of the impact on parser performance of the different modifications to the corporaall trials in this section used sections 0020 for counts held out 2122 and tested on 2324table 3 shows several thingsfirst it shows relative performance for unmodified no punct and cj corpora with the full set of conditioning informationwe can see that removing the punctuation causes a dramatic drop in the accuracy and efficiency of the parserinterestingly it also causes coverage to become nearly total with failure on just two sentences per thousand on averagewe see the familiar pattern in the cj corpus results of improving performance as the amount of conditioning information growsin this case we have perplexity results as well and figure 7 shows the reduction in parser error rule expansions and perplexity as the amount of conditioning information growswhile all three seem to be similarly improved by the addition of structural context the addition of ccommanding heads has only a moderate effect on the parser accuracy but a very large effect on the perplexitythe fact that the efficiency was improved more than the accuracy in this case seems to indicate that this additional information is causing the distribution to become more peaked so that fewer analyses are making it into the beamreduction in average precisionrecall error number of rule expansions and perplexity as conditioning increasestable 4 compares the perplexity of our model with chelba and jelinek on the same training and testing corporawe built an interpolated trigram model to serve as a baseline and also interpolated our model perplexity with the trigram using the same mixing coefficient as they did in their trials the trigram model was also trained on sections 0020 of the cj corpustrigrams and bigrams were binned by the total count of the conditioning words in the training corpus and maximum likelihood mixing coefficients were calculated for each bin to mix the trigram with bigram and unigram estimatesour trigram model performs at almost exactly the same level as theirs does which is what we would expectour parsing model perplexity improves upon their first result fairly substantially but is only slightly better than their second resulthowever when we interpolate with the trigram we see that the additional improvement is greater than the one they experiencedthis is not surprising since our conditioning information is in many ways orthogonal to that of the trigram insofar as it includes the probability mass of the derivations in contrast their model in some instances is very close to the trigram by conditioning on two words in the prefix string which may happen to be the two adjacent wordsthese results are particularly remarkable given that we did not build 
our model as a language model per se but rather as a parsing modelthe perplexity improvement was achieved by simply taking the existing parsing model and applying it with no extra training beyond that done for parsingthe hope was expressed above that our reported perplexity would be fairly close to the quottruequot perplexity that we would achieve if the model were properly normalized ie that the amount of probability mass that we lose by pruning is smallone way to test this is the following at each point in the sentence calculate the conditional probability of each word in the vocabulary given the previous words and sum themif there is little loss of probability mass the sum should be close to onewe did this for the first 10 sentences in the test corpus a total of 213 words one of the sentences was a failure so that 12 of the word probabilities were not estimated by our modelof the remaining 201 words the average sum of the probabilities over the 10000word vocabulary was 09821 with a minimum of 07960 and a maximum of 09997interestingly at the word where the failure occurred the sum of the probabilities was 09301in order to get a sense of whether these perplexity reduction results can translate to improvement in a speech recognition task we performed a very small preliminary experiment on nbest liststhe darpa 93 hub1 test setup consists of 213 utterances read from the wall street journal a total of 3446 wordsthe corpus comes with a baseline trigram model using a 20000word open vocabulary and trained on approximately 40 million wordswe used ciprian chelba a decoder to find the 50 best hypotheses from each lattice along with the acoustic and trigram scoresgiven the idealized circumstances of the production the lattices are relatively sparse and in many cases 50 distinct string hypotheses were not found in a latticewe reranked an average of 229 hypotheses with our language model per utteranceone complicating issue has to do with the tokenization in the penn treebank versus that in the hub1 latticesin particular contractions but not in the hub1 latticessplitting of the contractions is critical for parsing since the two parts oftentimes fall in different constituentswe follow chelba in dealing with this problem for parsing purposes we use the penn treebank tokenization for interpolation with the provided trigram model and for evaluation the lattice tokenization is usedif we are to interpolate our model with the lattice trigram we must wait until we have our model estimate for the probability of both parts of the contraction their product can then be interpolated with the trigram estimatein fact interpolation in these trials made no improvement over the better of the uninterpolated models but simply resulted in performance somewhere between the better and the worse of the two models so we will not present interpolated trials heretable 5 reports the word and sentence error rates for five different models the trigram model that comes with the lattices trained on approximately 40m words with a vocabulary of 20000 the bestperforming model from chelba which was interpolated with the lattice trigram at a 04 our parsing model with the same training and vocabulary as the perplexity trials above a trigram model with the same training and vocabulary as the parsing model and no language model at allthis last model shows the performance from the acoustic model alone without the influence of the language modelthe log of the language model score is multiplied by the language model weight when summing the logs 
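A small sketch of the probability-mass check described above: at each word position, sum the model's conditional probability over the entire vocabulary and report how close the sums come to one, which indicates how much mass the pruned search has lost. The model interface and the toy uniform model are assumptions.

def vocabulary_mass(cond_prob, vocab, history):
    """cond_prob(word, history) -> the model's conditional probability estimate.
    Returns the total mass assigned to the vocabulary at this position."""
    return sum(cond_prob(w, history) for w in vocab)

def summarize_mass(cond_prob, vocab, sentences):
    sums = []
    for sent in sentences:
        for i in range(len(sent)):
            sums.append(vocabulary_mass(cond_prob, vocab, sent[:i]))
    return min(sums), sum(sums) / len(sums), max(sums)

# Toy check with a hypothetical uniform model over a three-word vocabulary.
vocab = ["the", "dog", "barks"]
uniform = lambda w, h: 1.0 / len(vocab)
print(summarize_mass(uniform, vocab, [["the", "dog", "barks"]]))   # (1.0, 1.0, 1.0)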
of the language and acoustic scores as a way of increasing the relative contribution of the language model to the composite scorewe followed chelba in using an lm weight of 16 for the lattice trigramfor our model and the treebank trigram model the lm weight that resulted in the lowest error rates is giventhe small size of our training data as well as the fact that we are rescoring nbest lists rather than working directly on lattices makes comparison with the other models not particularly informativewhat is more informative is the difference between our model and the trigram trained on the same amount of datawe achieved an 85 percent relative improvement in word error rate and an 83 percent relative improvement in sentence error rate over the treebank trigraminterestingly as mentioned above interpolating two models together gave no improvement over the better of the two whether our model was interpolated with the lattice or the treebank trigramthis contrasts with our perplexity results reported above as well as with the recognition experiments in chelba where the best results resulted from interpolated modelsthe point of this small experiment was to see if our parsing model could provide useful information even in the case that recognition errors occur as opposed to the fully grammatical strings upon which the perplexity results were obtainedas one reviewer pointed out given that our model relies so heavily on context it may have difficulty recovering from even one recognition error perhaps more difficulty than a more locally oriented trigramwhile the improvements over the trigram model in these trials are modest they do indicate that our model is robust enough to provide good information even in the face of noisy inputfuture work will include more substantial word recognition experimentsthe last set of results that we will present addresses the question of how wide the beam must be for adequate resultsthe base beam factor that we have used to this point is 10 which is quite wideit was selected with the goal of high parser accuracy but in this new domain parser accuracy is a secondary measure of performanceto determine the effect on perplexity we varied the base beam factor in trials on the chelba and jelinek corpora keeping the level of conditioning information constant and table 6 shows the results across a variety of factorsthe parser error parser coverage and the uninterpolated model perplexity all suffered substantially from a narrower search but the interpolated perplexity remained quite good even at the extremesfigure 8 plots the percentage increase in parser error model perplexity interpolated perplexity and efficiency as the base beam factor decreasednote that the model perplexity and parser accuracy are quite similarly affected but that the interpolated perplexity remained far below the trigram baseline even with extremely narrow beamsthe empirical results presented above are quite encouraging and the potential of this kind of approach both for parsing and language modeling seems very promisingincrease in average precisionrecall error model perplexity interpolated perplexity and efficiency as base beam factor decreaseswith a simple conditional probability model and simple statistical search heuristics we were able to find very accurate parses efficiently and as a side effect were able to assign word probabilities that yield a perplexity improvement over previous resultsthese perplexity improvements are particularly promising because the parser is providing information that is in 
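A minimal sketch of the n-best rescoring step described here: each hypothesis carries an acoustic log score and a language model log score, the language model log score is scaled by the language model weight, and the hypothesis with the best combined score is selected. The field names and the toy scores are assumptions; word and sentence error rates would be computed separately against the reference transcription.

from dataclasses import dataclass
from typing import List

@dataclass
class Hypothesis:
    words: List[str]
    acoustic_logprob: float    # log score from the acoustic model
    lm_logprob: float          # log score from the language model being tested

def rescore_nbest(hypotheses: List[Hypothesis], lm_weight: float = 16.0) -> Hypothesis:
    """Pick the hypothesis maximizing acoustic + lm_weight * language model log score."""
    return max(hypotheses, key=lambda h: h.acoustic_logprob + lm_weight * h.lm_logprob)

nbest = [Hypothesis(["the", "dog", "barks"], acoustic_logprob=-120.0, lm_logprob=-9.5),
         Hypothesis(["the", "dog", "parks"], acoustic_logprob=-118.0, lm_logprob=-12.0)]
print(rescore_nbest(nbest, lm_weight=16.0).words)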
some sense orthogonal to the information provided by a trigram model as evidenced by the robust improvements to the baseline trigram when the two models are interpolatedthere are several important future directions that will be taken in this areafirst there is reason to believe that some of the conditioning information is not uniformly useful and we would benefit from finer distinctionsfor example the probability of a preposition is presumably more dependent on a ccommanding head than the probability of a determiner isyet in the current model they are both conditioned on that head as leftmost constituents of their respective phrasessecond there are advantages to topdown parsing that have not been examined to date eg empty categoriesa topdown parser in contrast to a standard bottomup chart parser has enough information to predict empty categories only where they are likely to occurby including these nodes we may be able to bring certain longdistance dependencies into a local focusin addition as mentioned above we would like to further test our language model in speech recognition tasks to see if the perplexity improvement that we have seen can lead to significant reductions in word error rateother parsing approaches might also be used in the way that we have used a topdown parserearley and leftcorner parsers as mentioned in the introduction also have rooted derivations that can be used to calculated generative string prefix probabilities incrementallyin fact leftcorner parsing can be simulated by a topdown parser by transforming the grammar as was done in roark and johnson and so an approach very similar to the one outlined here could be used in that caseperhaps some compromise between the fully connected structures and extreme underspecification will yield an efficiency improvementalso the advantages of headdriven parsers may outweigh their inability to interpolate with a trigram and lead to better offline language models than those that we have presented heredoes a parsing model capture exactly what we need for informed language modelingthe answer to that is nosome information is simply not structural in nature and we might expect other kinds of models to be able to better handle nonstructural dependenciesthe improvement that we derived from interpolating the different models above indicates that using multiple models may be the most fruitful path in the futurein any case a parsing model of the sort that we have presented here should be viewed as an important potential source of key information for speech recognitionfuture research will show if this early promise can be fully realizedthe author wishes to thank mark johnson for invaluable discussion guidance and moral support over the course of this projectmany thanks also to eugene charniak for the use of certain grammar training routines and for an enthusiastic interest in the projectthanks also to four anonymous reviewers for valuable and insightful comments and to ciprian chelba sanjeev khudanpur and frederick jelinek for comments and suggestionsfinally the author would like to express his appreciation to the participants of discussions during meetings of the brown
J01-2004
probabilistic topdown parsing and language modelingthis paper describes the functioning of a broadcoverage probabilistic topdown parser and its application to the problem of language modeling for speech recognitionthe paper first introduces key notions in language modeling and probabilistic parsing and briefly reviews some previous approaches to using syntactic structure for language modelinga lexicalized probabilistic topdown parser is then presented which performs very well in terms of both the accuracy of returned parses and the efficiency with which they are found relative to the best broadcoverage statistical parsersa new language model that utilizes probabilistic topdown parsing is then outlined and empirical results show that it improves upon previous work in test corpus perplexityinterpolation with a trigram model yields an exceptional improvement relative to the improvement observed by other models demonstrating the degree to which the information captured by our parsing model is orthogonal to that captured by a trigram modela small recognition experiment also demonstrates the utility of the modelour parser works lefttoright through the sentence and abjures dynamic programming in favor of a beam search keeping some large number of possibilities to extend by adding the next word and then repruningat each word in the string our topdown parser provides access to the weighted set of partial analyses in the beam
the interaction of knowledge sources in word sense disambiguation word sense disambiguation is a computational linguistics task likely to benefit from the tradition of combining different knowledge sources in artificial intelligence research an important step in the exploration of this hypothesis is to determine which linguistic knowledge sources are most useful and whether their combination leads to improved results we present a sense tagger which uses several knowledge sources tested accuracy exceeds 94 on our evaluation corpus our system attempts to disambiguate all content words in running text rather than limiting itself to treating a restricted vocabulary of words it is argued that this approach is more likely to assist the creation of practical systems word sense disambiguation is a computational linguistics task likely to benefit from the tradition of combining different knowledge sources in artificial intelligence researchan important step in the exploration of this hypothesis is to determine which linguistic knowledge sources are most useful and whether their combination leads to improved resultswe present a sense tagger which uses several knowledge sourcestested accuracy exceeds 94 on our evaluation corpusour system attempts to disambiguate all content words in running text rather than limiting itself to treating a restricted vocabulary of wordsit is argued that this approach is more likely to assist the creation of practical systemsword sense disambiguation is a problem long recognised in computational linguistics and there has been a recent resurgence of interest including a special issue of this journal devoted to the topic despite this there is still a considerable diversity of methods employed by researchers as well as differences in the definition of the problems to be tackledthe senseval evaluation framework was a darpastyle competition designed to bring some conformity to the field of wsd although it has yet to achieve that aim completelythe main sources of divergence are the choice of computational paradigm the proportion of text words disambiguated the granularity of the meanings assigned to them and the knowledge sources usedwe will discuss each in turnresnik and yarowsky noted that for the most part partofspeech tagging is tackled using the noisy channel model although transformation rules and grammaticostatistical methods have also had some successthere has been far less consensus as to the best approach to wsdcurrently machine learning methods and combinations of classifiers have been popularthis paper reports a wsd system employing elements of both approachesanother source of difference in approach is the proportion of the vocabulary disambiguatedsome researchers have concentrated on producing wsd systems that base results on a limited number of words for example yarowsky and schtitze who quoted results for 12 words and a second group including leacock towell and voorhees and bruce and wiebe who gave results for just one namely interestbut limiting the vocabulary on which a system is evaluated can have two serious drawbacksfirst the words used were not chosen by frequencybased sampling techniques and so we have no way of knowing whether or not they are special cases a point emphasised by kilgarriff secondly there is no guarantee that the techniques employed will be applicable when a larger vocabulary is tackledhowever it is likely that markup for a restricted vocabulary can be carried out more rapidly since the subject has to learn the possible senses of fewer 
wordsamong the researchers mentioned above one must distinguish between on the one hand supervised approaches that are inherently limited in performance to the words over which they evaluate because of limited training data and on the other hand approaches whose unsupervised learning methodology is applied to only small numbers of words for evaluation but which could in principle have been used to tag all content words in a textothers such as harley and glennon and ourselves wilks and stevenson have concentrated on approaches that disambiguate all content wordsin addition to avoiding the problems inherent in restricted vocabulary systems wide coverage systems are more likely to be useful for nlp applications as discussed by wilks et al a third difference concerns the granularity of wsd attempted which one can illustrate in terms of the two levels of semantic distinctions found in many dictionaries homograph and sense like cowie guthrie and guthrie we shall give results at both levels but it is worth pointing out that the targets of say work using translation equivalents and roget categories correspond broadly to the wider homograph distinctionsin this paper we attempt to show that the high level of results more typical of systems trained on many instances of a restricted vocabulary can also be obtained by large vocabulary systems and that the best results are to be obtained from an optimization of a combination of types of lexical knowledge syntactic semantic and pragmatic information are all potentially useful for wsd as can be demonstrated by considering the following sentences the first two sentences contain the ambiguous word well as an adjective in where it is used in its quotstate of healthquot sense and as a noun in meaning quotwater supplyquotsince the two usages are different parts of speech they can be disambiguated by this syntactic propertysentence contains the word bat whose nominal readings are ambiguous between the quotcreaturequot and quotsports equipmentquot meaningspartofspeech information cannot disambiguate the senses since both are nominal usageshowever this sentence can be disambiguated using semantic information such as preference restrictionsthe verb sleep prefers an animate subject and only the quotcreaturequot sense of bat is animateso sentence can be effectively disambiguated by its semantic behaviour but not by its syntaxa preference restriction will not disambiguate sentence since the direct object preference will be at least as general as physical object and any restriction on the direct object slot of the verb sell would cover both sensesthe sentence can be disambiguated on pragmatic grounds because it is far more likely that sports equipment will be bought in a sports shopthus pragmatic information can be used to disambiguate bat to its quotsports equipmentquot senseeach of these knowledge sources has been used for wsd and in section 3 we describe a method which performs roughgrained disambiguation using partofspeech informationwilks describes a system which performs wsd using semantic information in the form of preference restrictionslesk also used semantic information for wsd in the form of textual definitions from dictionariespragmatic information was used by yarowsky whose approach relied upon statistical models of categories from roget thesaurus a resource that had been used in much earlier approaches to wsd such as masterman the remainder of this paper is organised as follows section 2 reviews some systems which have combined knowledge sources for 
wsdin section 3 we discuss the relationship between semantic disambiguation and partofspeech tagging reporting an experiment which quantifies the connectiona general wsd system is presented in section 4in section 5 we explain the strategy used to evaluate this system and we report the results in section 6a comprehensive review of wsd is beyond the scope of this paper but may be found in ide and veronis combining knowledge sources for wsd is not a new idea in this section we will review some of the systems which have tried to do thatearly work on coarsegrained wsd based on combining knowledge sources was undertaken by mcroy her work was carried out without the use of machinereadable dictionaries necessitating the manual creation of the complex set of lexicons this system requiresthere was a lexicon of 8775 unique roots a hierarchy of 1000 concepts and a set of 1400 collocational patternsthe collocational patterns are automatically extracted from a corpus of text in the same domain as the text being disambiguated and senses are manually assigned to eachif the collocation occurs in the text being disambiguated then it is assumed that the words it contains are being used in the same senses as were assigned manuallydisambiguation makes use of several knowledge sources frequency information syntactic tags morphological information semantic context collocations and word associations rolerelated expectations and selectional restrictionsthe knowledge sources are combined by adding their resultseach knowledge source assigns a numeric value to each of the possible sensesthe numerical value depends upon the type of knowledge sourcesome knowledge sources have only two possible values for example the frequency information has one value for frequent senses and another for infrequent onesthe numerical values assigned for each were determined manuallythe selectional restrictions knowledge source assigns scores in the range 10 to 10 with higher scores being assigned to senses that are more specific disambiguation is carried out by summing the scores from each knowledge source for all candidate senses and choosing the one with the highest overall scorein a sample of 25000 words from the wall street journal the system covered 98 of wordoccurrences that were not proper nouns and were not abbreviated demonstrating the impressive coverage of the handcrafted lexiconsno quantitative evaluation of the disambiguation quality was carried out due to the difficulty in obtaining annotated test data a problem made more acute by the use of a custombuilt lexiconin addition comparison of system output against manually annotated text had yet to become a standard evaluation strategy in wsd researchthe cambridge international dictionary of english is a learners dictionary which consists of definitions written using a 2000 word controlled vocabularythe senses in cide are grouped by guidewords similar to homographs in ldoceit was produced using a large corpus of english created by the cambridge language survey the cls also produced a semantic tagger a commercial product that tags words in text with senses from their mrdthe tagger consists of four subtaggers running in parallel with their results being combined after all have runthe first tagger uses collocations derived from the cide example sentencesthe second examines the subject codes for all words in a particular sentence and the number of matches with other words is calculateda partofspeech tagger produced inhouse by cup is run over the text and high scores are assigned to 
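A minimal sketch of the score-combination scheme used by the systems reviewed here: every knowledge source assigns a numeric score to each candidate sense, per-source weights (uniform for plain addition, fixed per subtagger in a CLS-style scheme) scale the contributions, and the sense with the highest total wins. The sources, weights, and sense labels below are illustrative assumptions.

def combine_knowledge_sources(candidate_senses, source_scores, source_weights=None):
    """source_scores: {source_name: {sense: score}}; weights default to 1.0
    (plain addition); a CLS-style scheme would fix a weight per subtagger."""
    weights = source_weights or {}
    totals = {}
    for sense in candidate_senses:
        totals[sense] = sum(weights.get(src, 1.0) * scores.get(sense, 0.0)
                            for src, scores in source_scores.items())
    return max(totals, key=totals.get), totals

senses = ["bat/creature", "bat/sports-equipment"]
scores = {
    "frequency":                {"bat/creature": 1.0, "bat/sports-equipment": 0.0},
    "selectional-restrictions": {"bat/creature": -2.0, "bat/sports-equipment": 6.0},
    "collocations":             {"bat/sports-equipment": 3.0},
}
best, totals = combine_knowledge_sources(senses, scores)
print(best, totals)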
senses that agree with the syntactic tag assignedfinally the selectional restrictions of verbs and adjectives are examinedthe results of these processes are combined using a simple weighting scheme this weighting scheme inspired by those used in computer chess programs assigns each subprocess a weight in the range 100 to 100 before summingunlike mcroy this approach does not consider the specificity of a knowledge source in a particular instance but always assigns the same overall weight to eachharley and glennon report 78 correct tagging of all content words at the cide guideword level and 73 at the subsense level as compared to a handtagged corpus of 4000 wordsan early application of machine learning to the wsd problem was carried out by brown et al several different disambiguation cues such as first noun to the leftright and second word to the leftright were extracted from parallel texttranslation differences were used to define the senses as this approach was used in an englishfrench machine translation systemthe parallel text effectively provided supervised training examples for this algorithmnadas et al used the flipflop algorithm to decide which of the cues was most important for each word by maximizing mutual information scores between wordsyarowsky used an extremely rich features set by expanding this set with syntactic relations such as subjectverb verbobject and adjectivenoun relations partofspeech ngrams and othersthe approach was based on the hypothesis that words exhibited quotone sense per collocationquot a large corpus was examined to compute the probability of a particular collocate occurring with a certain sense and the discriminatory power of each was calculated using the loglikelihood ratiothese ratios were used to create a decision list with the most discriminating collocations being preferredthis approach has the benefit that it does not combine the probabilities of the collocates which are highly nonindependent knowledge sourcesyarowsky also examined the discriminatory power of the individual knowledge sourcesit was found that each collocation indicated a particular sense with a very high degree of reliability with the most successfulthe first word to the left of a nounachieving 99 precisionyet collocates have limited applicability although precise they can only be applied to a limited number of tokensyarowsky dealt with this problem largely by producing an unsupervised learning algorithm that generates probabilistic decision list models of word senses from seed collocatesthis algorithm achieves 97 correct disambiguationin these experiments yarowsky deals exclusively with binary sense distinctions and evaluates his highly effective algorithms on small samples of word tokensng and lee explored an approach to wsd in which a word is assigned the sense of the most similar example already seenthey describe this approach as quotexemplarbased learningquot although it is also known as knearest neighbor learningtheir system is known as lexas a supervised learning approach which requires disambiguated training textlexas was based on pebls a publically available exemplarbased learning algorithma set of features is extracted from disambiguated example sentences including partofspeech information morphological form surrounding words local collocates and words in verbobject syntactic relationswhen a new untagged usage is encountered it is compared with each of the training examples and the distance from each is calculated using a metric adopted from cost and salzberg this is 
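The decision-list idea attributed to Yarowsky above can be sketched as follows for binary sense distinctions: estimate, for each collocational feature, the log-likelihood ratio of the two senses given that feature, sort the features by the magnitude of that ratio, and classify a new context with the first matching feature. The feature encoding and the smoothing constant are assumptions.

import math
from collections import defaultdict

def build_decision_list(training, alpha=0.1):
    """training: list of (features, sense) pairs with sense in {'A', 'B'}.
    Returns [(abs_log_likelihood_ratio, feature, predicted_sense)], most
    discriminating feature first."""
    counts = defaultdict(lambda: {"A": 0, "B": 0})
    for features, sense in training:
        for f in features:
            counts[f][sense] += 1
    dlist = []
    for f, c in counts.items():
        llr = math.log((c["A"] + alpha) / (c["B"] + alpha))   # alpha smooths zero counts
        dlist.append((abs(llr), f, "A" if llr > 0 else "B"))
    dlist.sort(reverse=True)
    return dlist

def classify(dlist, features, default="A"):
    for _, f, sense in dlist:
        if f in features:        # first (most reliable) matching collocation decides
            return sense
    return default

train = [({"left-word=baseball", "word+1=swing"}, "B"),
         ({"left-word=vampire"}, "A"),
         ({"left-word=baseball"}, "B")]
dl = build_decision_list(train)
print(classify(dl, {"left-word=baseball", "word+1=hit"}))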
calculated as the sum of the differences between each pair of features in the two vectors. The difference between two values v_1 and v_2 of a feature is calculated according to

\[ \delta(v_1, v_2) = \sum_{i=1}^{n} \left| \frac{C_{1i}}{C_1} - \frac{C_{2i}}{C_2} \right| \]

where C_{1i} represents the number of training examples with value v_1 that are classified with sense i in the training corpus and C_1 the number with value v_1 in any sense; C_{2i} and C_2 denote similar values, and n denotes the total number of senses for the word under consideration. The sense of the example with the minimum distance from the untagged usage is chosen; if there is more than one with the same distance, one is chosen at random to break the tie. Ng and Lee tested LEXAS on two separate data sets: one used previously in WSD research, the other a new manually tagged corpus. The common data set was the "interest" corpus constructed by Bruce and Wiebe, consisting of 2,639 sentences from the Wall Street Journal, each containing an occurrence of the noun interest. Each occurrence is tagged with one of its six possible senses from LDOCE. Evaluation is carried out through 100 random trials, each trained on 1,769 sentences and tested on the 600 remaining sentences. The average accuracy was 87.4%, significantly higher than the figure of 78% reported by Bruce and Wiebe. Further evaluation was carried out on a larger data set constructed by Ng and Lee. This consisted of 192,800 occurrences of the 121 nouns and 70 verbs that are "the most frequently occurring and ambiguous words in English". The corpus was made up from the Brown Corpus and the Wall Street Journal corpus and was tagged with the correct senses from WordNet by university undergraduates specializing in linguistics. Before training, two subsets of the corpus were put aside as test sets: the first (BC50) contains 7,119 occurrences of the ambiguous words from the Brown Corpus, while the second (WSJ6) contained 14,139 from the Wall Street Journal corpus. LEXAS correctly disambiguated 54% of words in BC50 and 68.6% in WSJ6. Ng and Lee point out that both results are higher than choosing the first or most frequent sense in each of the corpora. The authors attribute the lower performance on the Brown Corpus to the wider variety of text types it contains. Ng and Lee attempted to determine the relative contribution of each knowledge source. This was carried out by rerunning the data from the "interest" corpus through the learning algorithm, this time removing all but one set of features. The results are shown in Table 1. They found that the local collocations were the most useful knowledge source in their system. However, it must be remembered that this experiment was carried out on a data set consisting of a single word and may therefore not be generalizable. This review has been extremely brief and has not covered large areas of research into WSD; for example, we have not discussed connectionist approaches as used by Waltz and Pollack, Veronis and Ide, Hirst, and Cottrell. However, we have attempted to discuss some of the approaches to combining diverse types of linguistic knowledge for WSD and have concentrated on those which are related to the techniques used in our own disambiguation system. Of central interest to our research is the relative contribution of the various knowledge sources which have been applied to the WSD problem. Both Ng and Lee and Yarowsky reported some results in the area; however, Ng and Lee reported results for only a single word and Yarowsky considers only words with two possible senses. This paper is an attempt to increase the scope of this research by discussing a disambiguation algorithm which operates over all content words and combines
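Returning to the exemplar-based distance defined at the start of this passage, the following is a minimal sketch (not the LEXAS implementation) of the value-difference measure and the nearest-neighbour sense assignment. The data layout (a nested dictionary of per-sense value counts and a list of (feature dictionary, sense) exemplars) is an assumption for illustration, and unseen feature values and random tie-breaking are not handled.

def value_difference(v1, v2, counts, senses):
    # counts[value][sense] = number of training examples in which this feature
    # value co-occurs with each sense; the distance is the summed absolute
    # difference of the two conditional sense distributions (Cost and Salzberg style).
    c1 = sum(counts[v1].values()) or 1
    c2 = sum(counts[v2].values()) or 1
    return sum(abs(counts[v1].get(s, 0) / c1 - counts[v2].get(s, 0) / c2)
               for s in senses)

def nearest_sense(new_features, exemplars, feature_counts, senses):
    # exemplars: list of (feature_dict, sense) pairs seen in training;
    # feature_counts[f][value][sense] supplies the counts used above.
    def distance(stored_features):
        return sum(value_difference(new_features[f], stored_features[f],
                                    feature_counts[f], senses)
                   for f in new_features)
    return min(exemplars, key=lambda ex: distance(ex[0]))[1]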
a varied set of linguistic knowledge sourcesin addition we examine the relative effect of each knowledge source to gauge which are the most important and under what circumstanceswe first report an indepth study of a particular knowledge source namely partofspeech tagsthe experiments described in this section use the longman dictionary of contemporary english ldoce is a learners dictionary designed for students of english containing roughly 36000 word typesldoce was innovative in its use of a defining vocabulary of 2000 words with which the definitions were writtenif a learner of english could master this small core then it was assumed they could understand every entry in the dictionaryin ldoce the senses for each word type are grouped into homographs sets of senses with related meaningsfor example one of the homographs of bank means stevenson and wilks interaction of knowledge sources in wsd bankl n 1 land along the side of a river lake etc2 earth which is heaped up in a field or a garden often making a border or division 3 a mass of snow mud clouds etc the banks of dark cloud promised a heavy storm 4 a slope made at bends in a road or racetrack so that they are safer for cars to go round 5 sandbank the dogger bank in the north sea can be dangerous for ships b ank2 v i0 to move with one side higher than the other esp when making a turn see also bank up barik3 n 1 a row esp of oars in an ancient boat or keys on a typewriter bank4 n 1 a place where money is kept and paid out on demand and where related activities go on see picture at street 2 a place where something is held ready for use esporganic product of human origin for medical use hospital bloodbanks have saved many lives 3 a supply of money or pieces for payment or use in a game of chance 4 break the bank to win all the money that the bank4 has in a game of chance bank5 v 1t1 to put or keep in a bank 2l9 esp with to keep one money where do you bankthe entry for bank in ldoce roughly quotthings piled upquot with different senses distinguishing exactly what is piled if the senses are sufficiently close together in meaning there will be only one homograph for that word which we then call monohomographichowever if the senses are far enough apart as in the bank case they will be grouped into separate homographs which we call polyhomographicas can be seen from the example entry each ldoce homograph includes information about the part of speech with which the homograph is marked and that applies to each of the senses within that homographthe vast majority of homographs in ldoce are marked with a single part of speech however about 2 of word types in the dictionary contain a homograph that is marked with more than one part of speech meaning that either part of speech may applyalthough the granularity of the distinction between homographs in ldoce is rather coarsegrained they are as we noted at the beginning of this paper an appropriate level for many practical computational linguistic applicationsfor example bank in the sense of quotfinancial institutionquot translates to ban que in french but when used in the quotedge of riverquot sense it translates as bordthis level of semantic disambiguation is frequently sufficient for choosing the correct target word in an englishtofrench machine translation system and is at a similar level of granularity to the sense distinctions explored by other researchers in wsd for example brown et al yarowsky and mcroy we began by examining the potential usefulness of partofspeech information for sense 
resolutionit was found that 34 of the contentword types in ldoce were polysemous and 12 polyhomographicif we assume that the part of speech of each polyhomographic word in context is known then 88 of word types would be disambiguated to the homograph levelsome words will be disambiguated to the homograph level if they are used in a certain part of speech but not othersfor example beam has 3 homographs in ldoce the first two are marked as nouns while the third is marked as verbthis word would be disambiguated if used as a verb but not if used as a nounif we assume that every word of this type is assigned a part of speech which disambiguates it then an additional 7 of words in ldoce could potentially be disambiguatedtherefore up to 95 of word types in ldoce can be disambiguated to the homograph level by partofspeech information alonehowever these figures do not take into account either errors in partofspeech tagging or the corpus distribution of tokens since each word type is counted exactly oncethe next stage in our analysis was to attempt to disambiguate some texts using the information obtained from partofspeech tagswe took five articles from the wall street journal containing 391 polyhomographic content wordsthese articles were manually tagged with the most appropriate ldoce homograph by one of the authorsthe texts were then partofspeech tagged using brill transformationbased learning tagger the tags assigned by the brill tagger were manually mapped onto the simpler partofspeech tag set used in ldoce2 if a word has more than one homograph with the same part of speech then partofspeech tags alone cannot always identify a single homograph in such cases we chose the first sense listed in ldoce since this is the one which occurs most frequently3 it was found that 874 of the polyhomographic content words were assigned the correct homographa baseline for this task can be calculated by computing the number of tokens that would be correctly disambiguated if the first homograph for each was chosen regardless of part of speech78 of polyhomographic tokens were correctly disambiguated this way using this approachthese results show there is a clear advantage to be gained by using the very simple partofspeechbased method described compared with simply choosing the first homographhowever we felt that it would be useful to carry out some further analysis of the datato do this it is useful to divide the polyhomographic words into four classes all based on the assumption that a partofspeech tagger has been run over the text and that homographs which do not correspond to the grammatical category assigned have been removedfull disambiguation if only a single homograph with the correct part of speech remains that word has been fully disambiguated by the taggerstevenson and wilks interaction of knowledge sources in wsd partial disambiguation if there is more than one possible homograph with the correct part of speech but some have been removed from consideration that word has been partially disambiguated by part of speechno disambiguation if all the homographs of a word have the same part of speech which is then assigned by the tagger then none can be removed and no disambiguation has been carried outpartofspeech error it is possible for the partofspeech tagger to assign an incorrect part of speech leading to the correct homograph being removed from considerationit is worth mentioning that this situation has two possible outcomes first some homographs with incorrect parts of speech may remain or second all 
homographs may have been removed from considerationin table 3 we show in the column labelled count the number of words in our five articles which fall into each of the four categoriesthe relative performance of the baseline method compared to the reported algorithm are shown in the rightmost two columnsthe figures in brackets indicate the percentage of polyhomographic words correctly disambiguated by each method on a perclass basisit can be seen that the majority of the polyhomographic words fall into the quotfull disambiguationquot category all of which are correctly disambiguated by the method reported herewhen no disambiguation is carried out the algorithm described simply chooses the first sense and so the results are the same for both methodsthe only condition under which choosing the first sense is more effective than using partofspeech information is when the partofspeech tagger makes an error and all the homographs with the correct part of speech are removed from considerationin most cases this means that the correct homograph cannot be chosen however in a small number of cases this is equivalent to choosing the most frequent sense since if all possible homographs have been removed from consideration the algorithm reverts to using the simpler heuristic of choosing the word first homograph4 although this result may seem intuitively obvious there have we believe been no other attempts to quantify the benefit to be gained from the application of a partofspeech tagger in wsd the method described here is effective in removing incorrect senses from consideration thereby reducing the search space if combined with other wsd methodsin the experiments reported in this section we made use of the particular structure of ldoce which assigns each sense to a homograph from which its part of speech information is inheritedhowever there is no reason to believe that the method reported here is limited to lexicons with this structurein fact this approach can be applied to any lexicon which assigns partofspeech information to senses although it would not always be possible to evaluate at the homograph level as we do herein the remainder of this paper we go on to describe a sense tagger that assigns senses from ldoce using a combination of classifiersthe set of senses considered by the classifiers is first filtered using partofspeech tagswe adopt a framework in which different knowledge sources are applied as separate modulesone type of module a filter can be used to remove senses from consideration when a knowledge source identifies them as unlikely in contextanother type can be used when a knowledge source provides evidence for a sense but cannot identify it confidently we call these partial taggers the choice of whether to apply a knowledge source as either a filter or a partial tagger depends on whether it is likely to rule out correct sensesif a knowledge source is unlikely to reject the correct sense then it can be safely implemented as a filter otherwise implementation as a partial tagger would be more appropriatein addition it is necessary to represent the context of ambiguous words so that this information can be used in the disambiguation processin the system described here these modules are referred to as feature extractorsour sense tagger is implemented within this modular architecture one where each module is a filter partial tagger or feature extractorthe architecture of the system is represented in figure 2this system currently incorporates a single filter three partial taggers and a 
single feature extractor sense tagger architecturebefore the filters or partial taggers are applied the text is tokenized lemmatized split into sentences and partofspeech tagged again using brill taggera named entity identifier is then run over the text to mark and categorize proper names which will provide information for the selectional restrictions partial tagger these preprocessing stages are carried out by modules from sheffield university information extraction system lasie and are described in more detail by gaizauskas et al our system disambiguates only the content words in the text and the partofspeech tags are used to decide which are content wordsthere is no attempt to disambiguate any of the words identified as part of a named entitythese are excluded because they have already been analyzed semantically by means of the classification added by the named entity identifier another reason for not attempting wsd on named entities is that when words are used as names they are not being used in any of the senses listed in a dictionaryfor example rose and may are names but there are no senses in ldoce for this usageit may be possible to create a dummy entry in the set of ldoce senses indicating that the word is being used as a name but then the sense tagger would simply repeat work carried out by the named entity identifierwe take the partofspeech tags assigned by the brill tagger and use a manually created mapping to translate these to the corresponding ldoce grammatical category any senses which do not correspond to the category returned are removed from considerationin practice the filtering is carried out at the same time as the lexical lookup phase and the senses whose grammatical categories do not correspond to the tag assigned are never attached to the ambiguous wordthere is also an option of turning off filtering so that all senses are attached regardless of the partofspeech tagif none of the dictionary senses for a given word agree with the partofspeech tag then all are keptit could be reasonably argued that removing senses is a dangerous strategy since if the partofspeech tagger made an error the correct sense could be removed from considerationhowever the experiments described in section 32 indicate that partofspeech information is unlikely to reject the correct sense and can be safely implemented as a filterlesk proposed that wsd could be carried out using an overlap count of content words in dictionary definitions as a measure of semantic closenessthis method would tag all content words in a sentence with their senses from a dictionary that contains textual definitionshowever it was found that the computations which would be necessary to test every combination of senses even for a sentence of modest length was prohibitivethe approach was made practical by cowie guthrie and guthrie rather than computing the overlap for all possible combinations of senses an approximate solution is identified by the simulated annealing optimization algorithm although this algorithm is not guaranteed to find the global solution to an optimization problem it has been shown to find solutions that are not significantly different from the optimal one cowie et al used ldoce for their implementation and found it correctly disambiguated 47 of words to the sense level and 72 to the homograph level bruce and guthrie hierarchy of ldoce semantic codes when compared with manually assigned sensesthe optimization must be carried out relative to a function that evaluates the suitability of a particular 
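Returning to the part-of-speech filter described above, the sketch below shows the intended behaviour; the tag mapping is an illustrative fragment rather than the manually created Brill-to-LDOCE mapping, and each sense is assumed to carry a 'pos' field.

# Illustrative fragment of the tag mapping (the real mapping is manually created).
BRILL_TO_LDOCE = {"NN": "n", "NNS": "n", "VB": "v", "VBD": "v", "VBZ": "v",
                  "JJ": "adj", "RB": "adv"}

def pos_filter(senses, brill_tag, filtering=True):
    # Remove senses whose LDOCE grammatical category disagrees with the tag
    # assigned by the tagger.  If filtering is switched off, or no sense agrees
    # with the assigned tag, all senses are kept, as described in the text.
    if not filtering:
        return senses
    category = BRILL_TO_LDOCE.get(brill_tag)
    kept = [s for s in senses if s["pos"] == category]
    return kept if kept else senses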
choice of sensesin the cowie et al implementation this was done using a simple count of the number of words in common between all the definitions for a given choice of senseshowever this method prefers longer definitions since they have more words that can contribute to the overlap and short definitions or definitions by synonym are correspondingly penalizedwe addressed this problem by computing the overlap in a different way instead of each word contributing one we normalized its contribution by the number of words in the definition it came fromin their implementation cowie et al also added pragmatic codes to the overlap computation however we prefer to keep different knowledge sources separate and use this information in another partial tagger the cowie et al implementation returned one sense for each ambiguous word in the sentence without any indication of the system confidence in its choice but we adapted the system to return a set of suggested senses for each ambiguous word in the sentenceour next partial tagger returns the set of senses for each word that is licensed by selectional preferences ldoce senses are marked with selectional restrictions expressed by 36 semantic codes not ordered in a hierarchyhowever the codes are clearly not of equal levels of generality for example the code h is used to represent all humans while m represents human malesthus for a restriction with type h we would want to allow words with the more specific semantic class m to meet itthis can be computed if the semantic categories are organized into a hierarchythen all categories subsumed by another category will be regarded as satisfying the restrictionbruce and guthrie manually identified relations between the ldoce semantic classes grouping the codes into small sets with roughly the same meaning and attached descriptions for example m k are grouped as a pair described as quothuman malequotthe hierarchy produced is shown in figure 3the named entities identified as part of the preprocessing phase are used by this module which requires first a mapping between the name types and ldoce semantic codes shown in table 4any use of preferences for sense selection requires prior identification of the site in the sentence where such a relationship holdsalthough prior identification was not done by syntactic methods in wilks it is often easiest to think of the relationships as specified in grammatical terms eg as subjectverb verbobject adjectivenoun etcwe perform this step by means of a shallow syntactic analyzer which finds the following grammatical relations the subject direct and indirect object of each verb and the noun modified by an adjectivestevenson describes an evaluation of this system in which the relations identified were compared with those derived from penn treebank parses it was found that the parser achieved 51 precision and 69 recallthe preference resolution algorithm begins by examining a verb and the nouns it dominateseach sense of the verb applies a preference to those nouns such that some of their senses may be disallowedsome verb senses will disallow all senses for a particular noun it dominates and these senses of the verb are immediately rejectedthis process leaves us with a set of verb senses that do not conflict with the nouns that verb governs and a set of noun senses licensed by at least one of those verb sensesfor each noun we then check whether it is modified by an adjectiveif it is we reject any senses of the adjectives which do not agree with any of the remaining noun sensesthis 
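The normalised overlap just described can be sketched as follows; whether shared words are counted as types or tokens is an assumption made here, and the simulated annealing search that optimises this score over candidate sense combinations is only indicated in the comments.

def normalized_overlap(definitions):
    # definitions: one list of content-word tokens per chosen sense.  Each word
    # that also appears in another definition contributes 1/len(own definition)
    # rather than 1, so short definitions and definitions by synonym are not
    # penalised.  A simulated annealing loop would repeatedly perturb the choice
    # of senses and keep (or probabilistically accept) changes to this score.
    score = 0.0
    for i, defn in enumerate(definitions):
        others = {tok for j, other in enumerate(definitions) if j != i
                  for tok in other}
        for tok in set(defn):
            if tok in others:
                score += 1.0 / max(len(defn), 1)
    return score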
approach is rather conservative in that it does not reject a sense unless it is impossible for it to fit into the preference pattern of the sentencein order to explain this process more fully we provide a walkthrough explanation of the procedure applied to a toy example shown in table 5it is assumed that the namedentity identifier has correctly identified john as a person and that the shallow parser has found the correct syntactic relationsin order to make this example as straightforward as possible we consider only the case in which the ambiguous words have few sensesthe disambiguation process operates by considering the relations between the words in known grammatical relations and before it begins we have essentially a set of possible senses for each word related via their syntaxthis situation is represented by the topmost tree in figure 4disambiguation is carried out by considering each verb sense in turn beginning with runas run is being used transitively it places two restrictions on the sentence first the subject must satisfy the restriction human and the object abstractin this example john has been identified as a named entity and marked as human so the subject restriction is not brokennote that if the restriction were broken then the verb sense run would be marked as incorrect by this partial tagger and no further attempt would be made to resolve its restrictionsas this was not the case we consider the directobject slot which places the restriction abstract on the noun which fills it course fulfils this criterion course is modified by hilly which expects a noun of type nonmovable solidhowever course is marked abstract which does not comply with this restrictiontherefore assuming that run is being used in its second sense leads to a situation in which there is no set of senses which comply with all the restrictions placed on them therefore run is not the correct sense of run and the partial tagger marks this sense as wrongthis situation is represented by the tree at the bottom left of figure 4the sense course is not rejected at this point since it may be found to be acceptable in the configuration of senses of another sense of runthe algorithm now assumes that run is the correct sensethis implies that course is the correct sense as it complies with the inanimate restriction that that verb sense places on the direct objectas well as complying with the restriction imposed by run course also complies with the one imposed by hilly since nonmovable solid is subsumed by inanimatetherefore assuming that the senses run and course are being used does not lead to any restrictions being broken and the algorithm marks these as correctbefore leaving this example it is worth discussing a few additional pointsthe sense course is marked as incorrect because there is no sense of run with which an interpretation of the sentence can be constructed using courseif there were further senses of run in our example and course was found to be suitable for those extra senses then the algorithm would mark the second sense of course as correctthere is however no condition under which run could be considered as correct through the consideration of further verb sensesalso although john and hilly are not ambiguous in this example they still participate in the disambiguation processin fact they are vital to its success as the correct senses could not have been identified without considering the restrictions placed by the adjective hillythis partial tagger returns for all ambiguous noun verb and adjective 
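A compact sketch of the preference check used in the walkthrough above follows; the hierarchy fragment and the field names ('subj', 'obj') are toy stand-ins for the Bruce and Guthrie hierarchy and the LDOCE restriction codes.

# Toy fragment of the semantic-code hierarchy (child -> parent).
PARENT = {"M": "H", "F": "H", "H": "animate", "animate": "anything",
          "abstract": "anything", "solid": "inanimate", "inanimate": "anything"}

def satisfies(code, restriction):
    # A code satisfies a restriction if it equals it or is subsumed by it.
    while code is not None:
        if code == restriction:
            return True
        code = PARENT.get(code)
    return False

def verb_sense_allowed(verb_sense, subject_codes, object_codes):
    # Keep a verb sense only if at least one sense of each argument it governs
    # meets the restriction it imposes; otherwise the verb sense is rejected,
    # as in the run/course example above.
    subj_ok = any(satisfies(c, verb_sense["subj"]) for c in subject_codes)
    obj_ok = (verb_sense.get("obj") is None or
              any(satisfies(c, verb_sense["obj"]) for c in object_codes))
    return subj_ok and obj_ok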
occurrences in the text the set of senses which satisfy the preferences imposed on those wordsadverbs do not have any selectional preferences in ldoce and so are ignored by this partial taggerour final partial tagger is a reimplementation of the algorithm developed by yarowsky this algorithm is dependent upon a categorization of words in the lexicon into subject areasyarowsky used the roget large categoriesin ldoce primary pragmatic codes indicate the general topic of a text in which a sense is likely to be usedfor example ln means quotlinguistics and grammarquot and this code is assigned to some senses of words such as quotellipsisquot quotablativequot quotbilingualquot and quotintransitivequotroget is a thesaurus so each entry in the lexicon belongs to one of the large categories but over half of the senses in ldoce are not assigned a primary codewe therefore created a dummy category denoted by used to indicate a sense which is not associated with any specific subject area and this category is assigned to all senses without a primary pragmatic codethese differences between the structures of ldoce and roget meant that we had to adapt the original algorithm reported in yarowsky in yarowsky implementation the correct subject category is estimated by applying which maximizes the sum of a bayesian term over all possible subject categories for the ambiguous word over the words in its context a context of 50 words on either side of the ambiguous word is usedyarowsky assumed the prior probability of each subject category to be constant so the value pr has no effect on the maximization in and was in effect being maximizedby including a general pragmatic code to deal with the lack of coverage we created an extremely skewed distribution of codes across senses and yarowsky assumption that subject codes occur with equal probability is unlikely to be useful in this applicationwe gained a rough estimate of the probability of each subject category by determining the proportion of senses in ldoce to which it was assigned and applying the maximum likelihood estimateit was found that results improved when therough estimate of the likelihood of pragmatic codes was usedthis procedure generates estimates based on counts of types and it is possible that this estimate could be improved by counting tokens although the problem of polysemy in the training data would have to be overcome in some waythe algorithm relies upon the calculation of probabilities gained from corpus statistics yarowsky used the grolier encyclopaedia which comprised a 10 million word corpusour implementation used nearly 14 million words from the nondialogue portion of the british national corpus yarowsky used smoothing procedures to compensate for data sparseness in the training corpus which we did not implementinstead we attempted to avoid this problem by considering only words which appeared at least 10 times in the training contexts of a particular worda context model is created for each pragmatic code by examining 50 words on either side of any word in the corpus containing a sense marked with that codedisambiguation is carried out by examining the same 100 word context window for an ambiguous word and comparing it against the models for each of its possible categoriesfurther details may be found in yarowsky yarowsky reports 92 correct disambiguation over 12 test words with an average of three possible roget large categorieshowever ldoce has a higher level of average ambiguity and does not contain as complete a thesaural hierarchy as 
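The expression referred to in this passage (the quantity maximised over subject categories) appears to have been lost in extraction. From the surrounding description it is presumably of the following form, where S ranges over the possible subject codes of the ambiguous word and w over the words in the 100-word context window:

\[ \hat{S} = \arg\max_{S} \sum_{w \in \text{context}} \log \frac{\Pr(w \mid S)\,\Pr(S)}{\Pr(w)} \]

Under Yarowsky's original assumption of a constant prior, Pr(S) drops out and the sum of log(Pr(w|S)/Pr(w)) is in effect what is maximised; in the adaptation described here, Pr(S) is instead roughly estimated from the proportion of LDOCE senses carrying each pragmatic code.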
roget so we would not expect such good results when the algorithm is adapted to ldoceconsequently we implemented the approach as a partial taggerthe algorithm identifies the most likely pragmatic code and returns the set of senses which are marked with that codein ldoce several senses of a word may be marked with the same pragmatic code so this partial tagger may return more than one sense for an ambiguous wordthe final disambiguation module is the only featureextractor in our system and is based on collocationsa set of 10 collocates are extracted for each ambiguous word in the text first word to the left first word to the right second word to the left second word to the right first noun to the left first noun to the right first verb to the left first verb to the right first adjective to the left and first adjective to the rightsome of these types of collocation were also used by brown et al and yarowsky all collocates are searched for within the sentence which contains the ambiguous wordif some particular collocation does not exist for an ambiguous word for example if it is the first or last word in a sentence then a null value is stored insteadrather than storing the surface form of the cooccurrence morphological roots are stored instead as this allows for a smaller set of collocations helping to cope with data sparsenessthe surface form of the ambiguous word is also extracted from the text and storedthe extracted collocations and surface form combine to represent the context of each ambiguous wordthe results from the disambiguation modules are then presented to a machine learning algorithm to combine their resultsthe algorithm we chose was the timbl memorybased learning algorithm memorybased learning is another name for exemplarbased learning as employed by ng and lee the timbl algorithm has already been used for various nlp tasks including partofspeech tagging and ppattachment like pebls which formed the core of ng and lee lexas system timbl classifies new examples by comparing them against previously seen casesthe class of the most similar example is assignedat the heart of this approach is the distance metric a which computes the similarity between instances x and ythis measure is calculated using the weighted overlap metric shown in which calculates the total distance by computing the sum of the distance between each position in the feature vectorfrom we can see that timbl treats numeric and symbolic features differentlyfor numeric features the unweighted distance is computed as the difference between the values for that feature in each instance divided by the maximum possible distance computed over all pairs of instances in the databasefor symbolic features the unweighted distance is 0 if they are identical and 1 otherwisefor both numeric and symbolic features this distance is multiplied by the weight for the particular feature based on the gain ratio measure introduced by quinlan this is a measure of the difference in uncertainty between the situations with and without knowledge of the value of that feature as in where c is the set of classifications v ranges over all values of the feature i and h is the entropy of the class labelsprobabilities are estimated from frequency of occurrence in the training datathe numerator of this formula determines the knowledge about the distribution of classes that is added by knowing the value of feature ihowever this measure can overestimate the value of features with large numbers of possible valuesto compensate it is divided by h the entropy 
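The two formulas referred to in this paragraph (the weighted overlap distance and the gain-ratio weight) are missing from the text; the standard TiMBL definitions that the description appears to paraphrase are reconstructed below, where X and Y are instances with n features, C is the set of class labels, V_i the set of values of feature i, and H denotes entropy.

\[ \Delta(X, Y) = \sum_{i=1}^{n} w_i\, \delta(x_i, y_i), \qquad
   \delta(x_i, y_i) = \begin{cases}
     \dfrac{|x_i - y_i|}{\max_i - \min_i} & \text{feature } i \text{ numeric} \\
     0 & x_i = y_i \\
     1 & \text{otherwise}
   \end{cases} \]

\[ w_i = \frac{H(C) - \sum_{v \in V_i} \Pr(v)\, H(C \mid v)}{H(V_i)} \]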
of the feature valuesword senses are presented to timbl in a featurevector representation with each sense which was not removed by the part of speech filter being represented by a separate vectorthe vectors are formed from the following pieces of information in order headword homograph number sense number rank of sense part of speech from lexicon output from the three partial taggers surface form of headword from the text the ten collocates and an indicator of whether the sense is appropriate or not in the context figure 5 shows the feature vectors generated for the word influence in the context shownthe final value in the feature vector shows whether the sense is correct or not in the particular contextwe can see that in this case there is one correct sense influence_l_la the definition of which is quotpower to gain an effect on the mind of example featurevector representation or get results from without asking or doing anythingquotfeatures 1019 are produced by the collocation extractor and these are identical since each vector is taken from the same contentfeatures 79 show the results of the partial taggersthe first is the output from simulated annealing the second the subj ect code and the third the select ional restrictionsall noun senses of influence share the same pragmatic code and consequently this partial tagger returns the same score for each sensea final point worth noting is that in ldoce influence has a verb sense which the partofspeech filter removed from consideration and consequently this sense is not included in the featurevector representationthe timbl algorithm is trained on tokens presented in this formatwhen disambiguating unannotated text the algorithm is applied to data presented in the same format without the classificationthe unclassified vectors are then compared with all the training examples and it is assigned the class of the closest onethe evaluation of wsd algorithms has recently become a muchstudied areagale church and yarowsky resnik and yarowsky and melamed and resnik each presented arguments for adopting various evaluation strategies with resnik and yarowsky proposal directly influencing the setup of senseval at the heart of their proposals is the ability of human subjects to mark up text with the phenomenon in question and evaluate the results of computationthis linguistic phenomenon has proved to be far more elusive and complex than many otherswe have discussed this at length elsewhere and will assume here that humans can mark up text for senses to a sufficient degreekilgarriff questioned the possibility of creating sensetagged texts claiming the task to be impossiblehowever it should be borne in mind that no alternative has yet been widely accepted and that kilgarriff himself used the markupandtest model for sensevalin the following discussion we compare the evaluation methodology adopted here with those proposed by othersthe standard evaluation procedure for wsd is to compare the output of the system against gold standard texts but these are very laborintensive to obtain lexical semantic markup is generally considered to be a more difficult and timeconsuming task than partofspeech markup rather than expend a vast amount of effort on manual tagging we decided to combine two existing resources semcor and sensus semcor is a 200000 word corpus with the content words manually tagged as part of the wordnet projectthe semantic tagging was carried out by trained lexicographers under disciplined conditions that attempted to keep tagging inconsistencies to a 
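As a concrete illustration of the ten collocation features included in these vectors, the sketch below extracts them for the word at position i in a lemmatised, part-of-speech-tagged sentence; the Penn-style tag prefixes ('N', 'V', 'J') and the use of None for missing collocates are assumptions.

def extract_collocates(lemmas, tags, i):
    # Returns, in order: first/second word to the left and right, then the first
    # noun, verb and adjective to the left and right, all within the sentence.
    def word_at(offset):
        j = i + offset
        return lemmas[j] if 0 <= j < len(lemmas) else None

    def first_with_tag(direction, prefix):
        j = i + direction
        while 0 <= j < len(lemmas):
            if tags[j].startswith(prefix):
                return lemmas[j]
            j += direction
        return None

    return [word_at(-1), word_at(1), word_at(-2), word_at(2),
            first_with_tag(-1, "N"), first_with_tag(1, "N"),
            first_with_tag(-1, "V"), first_with_tag(1, "V"),
            first_with_tag(-1, "J"), first_with_tag(1, "J")]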
minimumsensus is a largescale ontology designed for machinetranslation and was itself produced by merging the ontological hierarchies of wordnet ldoce and the penman upper model from isito facilitate the merging of these three resources to produce sensus knight and luk were required to derive a mapping between the senses in the two lexical resourceswe used this mapping to translate the wordnettagged content words in semcor to ldoce tagsthe mapping of senses is not onetoone and some wordnet synsets are mapped onto two or three ldoce senses when wordnet does not distinguish between themthe mapping also contained significant gaps chiefly words and senses not in the translation schemesemcor contains 91808 words tagged with wordnet synsets 6071 of which are proper names which we ignored leaving 85737 words which could potentially be translatedthe translation contains only 36869 words tagged with ldoce senses however this is a reasonable size for an evaluation corpus for the task and it is several orders of magnitude larger than those used by other researchers working in large vocabulary wsd for example cowie guthrie and guthrie harley and glennon and mahesh et al this corpus was also constructed without the excessive cost of additional handtagging and does not introduce any of the inconsistencies that can occur with a poorly controlled tagging strategyresnik and yarowsky proposed to evaluate large vocabulary wsd systems by choosing a set of test words and providing annotated test and training examples for just these words allowing supervised and unsupervised algorithms to be tested on the same vocabularythis model was implemented in senseval however for the evaluation of the system presented here there would have been no benefit from using this strategy since it still involves the manual tagging of large amounts of data and this effort could be used to create a gold standard corpus in which all content words are disambiguatedit is possible that some computational techniques may evaluate well over a small vocabulary but may not work for a large set of words and the evaluation strategy proposed by resnik and yarowsky will not discriminate between these casesin our evaluation corpus the most frequent ambiguous type is have which appears 604 timesa large number of words occur only once and nearly 95 have 25 occurrences or lesstable 6 shows the distribution of ambiguous types by number of corpus tokensit is worth noting that as would be expected the observed distribution is highly zipfian differences in evaluation corpora makes comparison difficulthowever some idea of the difficulty of wsd can be gained by calculating properties of the evaluation corpusgale church and yarowsky suggest that the lowest level of performance which can be reasonably expected from a wsd system is that achieved by assigning the most likely sense in all casessince the first sense in ldoce is usually the most frequent we calculate this baseline figure using a heuristic which assumes the first sense is always correctthis is the same baseline heuristic we used for the experiments reported in section 3 although those were for the homograph levelwe applied the naive heuristic of always choosing the first sense in our corpus and found that 309 of senses were correctly disambiguatedanother measure that gives insight into an evaluation corpus is to count the average polysemy ie the number of possible senses we can expect for each ambiguous word in the corpusthe average polysemy is calculated by counting the sum of possible senses 
for each ambiguous token and dividing by the number of tokens. This is represented by

\[ \text{average polysemy} = \frac{\sum_{w \in \text{text}} s_w}{N} \]

where w ranges over all ambiguous tokens in the corpus, s_w is the number of possible senses for word w, and N is the number of ambiguous tokens. The average polysemy for our evaluation corpus is 14.62. Our annotated corpus has the unusual property that more than one sense may be marked as correct for a particular token. This is an unavoidable side effect of a mapping between lexicon senses which is not one-to-one. However, it does not imply that WSD is easier in this corpus than one in which only a single sense is marked for each token, as can be shown from an imaginary example. The worst case for a WSD algorithm is when each of the possible semantic tags for a given word occurs with equal frequency in a corpus, and so the prior probabilities exhibit a uniform, uninformative distribution. Then a corpus with an average polysemy of 5 and 2 senses marked correct on each ambiguous token will have a baseline not less than 40%; however, one with an average polysemy of 2 and only a single sense on each will have a baseline of at least 50%. Test corpora in which each ambiguous token has exactly two senses were used by Brown et al., Yarowsky, and others. Our system was tested using a technique known as 10-fold cross-validation. This process is carried out by splitting the available data into ten roughly equal subsets. One of the subsets is chosen as the test data and the TiMBL algorithm is trained on the remainder. This is repeated ten times, so that each subset is used as test data exactly once, and results are averaged across all of the test runs. This technique provides two advantages: first, the best use can be made of the available data, and secondly, the computed results are more statistically reliable than those obtained by simply setting aside a single portion of the data for testing. The choice of scoring metric is an important one in the evaluation of WSD algorithms. The most commonly used metric is the ratio of words for which the system has assigned the correct sense compared to those which it attempted to disambiguate. Resnik and Yarowsky dubbed this the exact match metric, which is usually expressed as a percentage calculated according to

\[ \text{exact match} = \frac{\text{number of correctly assigned senses}}{\text{number of senses assigned}} \times 100 \]

Resnik and Yarowsky criticize this metric because it assumes a WSD system commits to a particular sense. They propose an alternative metric based on cross-entropy that compares the probabilities for each sense as assigned by a WSD system against those in the gold standard text. This metric is computed as

\[ -\frac{1}{N} \sum_{i=1}^{N} \log_2 \Pr_i(cs_i) \]

where the WSD system has processed N words and Pr_i(cs_i) is the probability assigned to the correct sense of word i. This evaluation metric may be useful for disambiguation systems that assign probabilities to each sense, such as those developed by Resnik and Yarowsky, since it provides more information than the exact match metric. However, for systems which simply choose a single sense and do not measure confidence it provides far less information. When a WSD system assigns only one sense to a word and that sense is incorrect, that word is scored as ∞. Consequently, the formula returns ∞ if there is at least one word in the test set for which the tagger assigns a zero probability to the correct sense. For WSD systems which assign exactly one sense to each word, this metric returns 0 if all words are tagged correctly and ∞ otherwise. This metric is potentially very useful for the evaluation of WSD
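A small sketch of the evaluation loop implied above follows: ten-fold cross-validation with the exact match score, allowing more than one acceptable sense per token as in the corpus described here. The training and tagging functions are placeholders, not the system's actual interface.

import random

def exact_match(assigned, gold):
    # Percentage of assigned senses that are correct; gold holds, for each token,
    # the set of senses accepted as correct.
    correct = sum(1 for a, g in zip(assigned, gold) if a in g)
    return 100.0 * correct / len(assigned)

def cross_validate(instances, train_fn, tag_fn, folds=10, seed=0):
    # Split the data into ten roughly equal subsets, train on nine, test on the
    # held-out one, repeat so each subset is tested exactly once, and average.
    data = instances[:]
    random.Random(seed).shuffle(data)
    scores = []
    for k in range(folds):
        test = [x for i, x in enumerate(data) if i % folds == k]
        train = [x for i, x in enumerate(data) if i % folds != k]
        model = train_fn(train)
        assigned = [tag_fn(model, x) for x in test]
        gold = [x["gold_senses"] for x in test]
        scores.append(exact_match(assigned, gold))
    return sum(scores) / folds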
systems that return nonzero probabilities for each possible sense however it is not useful for the metric presented in this paper and others that are not based on probabilistic modelsmelamed and resnik propose a metric for scoring wsd output when there may be more than one correct sense in the gold standard text as with the evaluation corpus we usethey mention that when a wsd system returns more than one sense it is difficult to tell if they are intended to be disjunctive or conjunctivethe score for a token is computed by dividing the number of correct senses identified by the algorithm by the total it returns making the metric equivalent to precision in information retrieval 6 for systems which return exactly one sense for each word this equates to scoring a token as 1 if the sense returned is correct and 0 otherwisefor the evaluation of the system presented here the metric proposed by melamed and resnik is then equivalent to the exact match metricthe exact match metric has the advantage of being widely used in the wsd literaturein our experiments the exact match figure is computed at the ldoce sense level where the number of tokens correctly disambiguated to the sense level is divided by the number ambiguous at that levelat the homograph level the number correctly disambiguated to the homograph is divided by the number which are polyhomographicusing the evaluation procedure described in the previous section it was found that the system correctly disambiguated 90 of the ambiguous instances to the finegrained sense level and in excess of 94 to the homograph levelin order to analyze the effectiveness of our tagger in more detail we split the main corpus into subcorpora by grammatical categoryin other words we created four individual subcorpora containing the ambiguous words which had been partofspeech tagged as nouns verbs adjectives and adverbsthe figures characterizing each of these corpora are shown in table 7the majority of the ambiguous words were nouns with far fewer verbs and adjectives and less than one thousand adverbsthe average polysemy for nouns at both sense and homograph levels is roughly the same as the overall corpus average although it is noticably higher for verbs at the sense levelat the sense level the average polysemy figures are much lower for adjectives and adverbsthis is because it is common for english words to act as either a noun or a verb and since these are the most polysemous grammatical categories the average polysemy count becomes large due to the cumulative effect of polysemy across grammatical categorieshowever words that can act as adjectives or adverbs are unlikely to be nouns or verbsthis plus the fact that adjectives and adverbs are generally less polysemous in ldoce means that their average polysemy in text is far lower than it is for nouns or verbstable 7 shows the accuracy of our system over the four subcorporawe can see that the tagger achieves higher results at the homograph level than the sense level on each of the four subcorpora which is consistent with the result over the whole corpusthere is quite a difference in the tagger results across the different subcorpora91 for nouns and 70 for adverbsperhaps the learning algorithm does not perform as well on adverbs because that corpus is significantly smaller than the other threethis hypothesis was checked by testing our system on portions of each of the three subcorpora that were roughly equal in size to the adverb subcorpuswe found that the reduced data caused a slight loss of accuracy on each of 
the three subcorpora however there was still a marked difference between the results for the adverb subcorpus and the other threefurther analysis showed that the differences in performance over different subcorpora seem linked to the behavior of different partial taggers when used in combinationin the following section we describe this behavior in more detailin order to gauge the contribution of each knowledge source separately we implemented a set of simple disambiguation algorithms each of which uses the output from a single partial taggereach algorithm takes the result of its partial tagger and checks it against the disambiguated text to see if it is correctif the partial tagger returns more than one sense as do the simulated annealing subject code and selectional preference taggers the first sense is taken to break the tiefor the partial tagger based on yarowsky subjectcode algorithm we choose the sense with the highest saliency valueif more than one sense has been assigned the maximum value the tie is again broken by choosing the first sensetherefore each partial tagger returns a single sense and the exact match metric is used to determine the proportion of tokens for which that tagger returns the correct sensethe partofspeech filter is run before the partial taggers make their decision and so they only consider the set of senses it did not removethe results of each tagger computed at both sense and homograph levels over the evaluation corpus and four subcorpora are shown in table 7we can see that the partial taggers that are most effective are those based on the simulated annealing algorithm and yarowsky subject code approachthe success of these modules supports our decision to use existing disambiguation algorithms that have already been developed rather than creating new onesthe most successful of the partial taggers is the one based on yarowsky algorithm for modelling thesaural categories by wide contextsthis consistently achieves over 70 correct disambiguation and seems particularly successful when disambiguating adverbs it is quite surprising that this algorithm is so successful for adverbs since it would seem quite reasonable to expect an algorithm based on subject codes to be more successful on nouns and less so on modifiers such as adjectives and adverbsyarowsky reports that his algorithm achieves 92 correct disambiguation which is nearly 13 higher than achieved in our implementationhowever yarowsky tested his implementation on a restricted vocabulary of 12 words the majority of which were nouns and used roget large categories as sensesthe baseline performance for this corpus is 665 considerably higher than the 309 computed for the corpus used in our experimentsanother possible reason for the difference in results is the fact that yarowsky used smoothing algorithms to avoid problems with the probability estimates caused by data sparsenesswe did not employ these procedures and used simple corpus frequency counts when calculating the probabilities it is not possible to say for sure that the differences between implementations did not lead to the differences in results but it seems likely that the difference in the semantic granularity of ldoce subject codes and roget categories was an important factorthe second partial tagger based on an existing approach is the one which uses simulated annealing to optimize the overlap of words shared by the dictionary definitions for a set of sensesin section 43 we noted that cowie et al reported 47 correct disambiguation to the sense level 
using this technique while in our adaptation over 17 more words are correctly disambiguatedour application filtered out senses with the incorrect part of speech in addition to using a different method to calculate overlap that takes account of short definitionsit seems likely that these changes are the source of the improved resultsour least successful partial tagger is the one based on selectional preferencesalthough its overall result is slightly below the overall corpus baseline it is very successful at disambiguating verbsthis is consistent with the work of resnik who reported that many words do not have strong enough selectional restrictions to carry out wsdwe expected preferences to be successful for adjectives as well although this is not the case in our evaluationthis is because the sense discrimination of adjectives is carried out after that for nouns in our algorithm and the former is hindered by the low results of the latteradverbs cannot be disambiguated by preference methods against ldoce because it does not contain the appropriate informationour analysis of the behavior of the individual partial taggers provides some clues to the behavior of the overall system consisting of all taggers on the different subcorpora as shown in table 7the system performs to roughly the same level over the noun verb and adjective subcorpora with only a 3 difference between the best and worst performancethe system worst performance is on the abverb subcorpus where it disambiguates only slightly more than 70 of tokens successfullythis may be due to the fact that only two partial taggers provide evidence for this grammatical categoryhowever the system still manages to disambiguate most of the adverbs to the homograph level successfully and this is probably because the partofspeech filter has ruled out the incorrect homographs not because the partial taggers performed wellone can legitimately wonder whether in fact the different knowledge sources for wsd are all ways of encoding the same semantic information in a similar way that one might suspect transformation rules and statistics encode the same information about partofspeech tag sequences in different formatshowever the fact that an optimized combination of our partial taggers yields a significantly higher figure than any one tagger operating independently shows that they must be orthogonal information sourceswe have already examined the usefulness of partofspeech tags for semantic disambiguation in section 3however we now want to know the effect it has within a system consisting of several disambiguation modulesit was found that accuracy at the sense level reduced to 8787 and to 9336 at the homograph level when the filter was removedalthough the system performance did not decrease by a large amount the partofspeech filter brings the additional benefit of reducing the search space for the three partial taggersin addition the fact that these results are not affected much by the removal of the partofspeech filter shows that the wsd modules alone do a reasonable job of resolving partofspeech ambiguity as a sideeffect of semantic disambiguationpreviously reported wsd systems that enjoyed a high level of accuracy have often operated on restricted vocabularies and employed a single wsd methodologythese methods have often been pursued for sound reasons to do with evaluation but have been limited in their applicability and also in their persuasiveness regarding the scalability and interaction of the various wsd partial methodsthis paper reported a system 
which disambiguated all content words in a text as defined by a standard machine readable dictionary with a high degree of accuracyour evaluation shows that disambiguation can be carried out with more accurate results when several knowledge sources are combinedit remains unclear exactly what it means to optimize the combination of modules within a learning system like timbl we could in further work treat the partofspeech tagger as a partial tagger and not a filter and we could allow the system to learn some quotoptimalquot weighting of all the partial taggersit also remains an interesting question whether because of the undoubted existence of novel senses in text a sense tagger can ever reach the level that partofspeech tagging hashowever we believe we have shown that interesting combinations of wsd methods on a substantial training corpus are possible and that this can show among other things the relative independence of the types of semantic information expressed by the various forms of lexical inputthe work described here was supported by the european union language engineering project ecran extraction of content research at nearmarket one of the authors was also supported by the epsrc grant malt while writing this paperwe are grateful for the feedback from many colleagues in sheffield especially mark hepple and for the detailed comments from the anonymous reviewers of an earlier version of this papergillian callaghan was extremely helpful in the preparation of the final version of this paperany errors are our own
J01-3001
The Interaction of Knowledge Sources in Word Sense Disambiguation. Word sense disambiguation is a computational linguistics task likely to benefit from the tradition of combining different knowledge sources in artificial intelligence research. An important step in the exploration of this hypothesis is to determine which linguistic knowledge sources are most useful and whether their combination leads to improved results. We present a sense tagger which uses several knowledge sources. Tested accuracy exceeds 94% on our evaluation corpus. Our system attempts to disambiguate all content words in running text rather than limiting itself to treating a restricted vocabulary of words. It is argued that this approach is more likely to assist the creation of practical systems. We present a classifier combination framework where disambiguation methods were combined using the TiMBL memory-based approach. We use the Longman Dictionary of Contemporary English as the sense inventory. We use POS tags of the focus word itself to aid sense disambiguations related to syntactic differences. We suggest that the use of both syntactic and lexical features will improve disambiguation accuracies.
automatic verb classification based on statistical distributions of argument structure automatic acquisition of lexical knowledge is critical to a wide range of natural language processing tasks especially important is knowledge about verbs which are the primary source of relational information in a sentencethe predicateargument structure that relates an action or state to its participants in this work we report on supervised learning experiments to automatically classify three major types of english verbs based on their argument structurespecifically the thematic roles they assign to participants we use linguisticallymotivated statistical indicators extracted from large annotated corpora to train the classifier achieving 698 accuracy for a task whose baseline is 34 and whose expertbased upper bound we calculate at 865 a detailed analysis of the performance of the algorithm and of its errors confirms that the proposed features capture properties related to the argument structure of the verbs our results validate our hypotheses that knowledge about thematic relations is crucial for verb classification and that it can be gleaned from a corpus by automatic means we thus demonstrate an effective combination of deeper linguistic knowledge with the robustness and scalability of statistical techniques automatic acquisition of lexical knowledge is critical to a wide range of natural language processing tasks especially important is knowledge about verbs which are the primary source of relational information in a sentencethe predicateargument structure that relates an action or state to its participants in
lexicalsemantic verb classes such as those proposed by levin in this paper we focus on argument structurethe thematic roles assigned by a verb to its argumentsas the way in which the relational semantics of the verb is represented at the syntactic levelspecifically our proposal is to automatically classify verbs based on argument structure properties using statistical corpusbased methodswe address the problem of classification because it provides a means for lexical organization which can effectively capture generalizations over verbs within the context of classification the use of argument structure provides a finer discrimination among verbs than that induced by subcategorization frames but a coarser classification than that proposed by levin this level of classification granularity appears to be appropriate for numerous language engineering tasksbecause knowledge of argument structure captures fundamental participantevent relations it is crucial in parsing and generation in machine translation and in information retrieval and extraction our use of statistical corpusbased methods to achieve this level of classification is motivated by our hypothesis that classbased differences in argument structure are reflected in statistics over the usages of the component verbs and that those statistics can be automatically extracted from a large annotated corpusthe particular classification problem within which we investigate this hypothesis is the task of learning the three major classes of optionally intransitive verbs in english unergative unaccusative and objectdrop verbstable 1 shows an example of a verb from each class in its transitive and intransitive usagesthese three classes are motivated by theoretical linguistic properties furthermore it appears that the classes capture typological distinctions that are useful for machine translation as well as processing distinctions that are useful for generating naturally occurring language the question then is what underlies these distinctionswe identify the property that precisely distinguishes among these three classes as that of argument structure ie the thematic roles assigned by the verbsthe thematic roles for each class and their mapping to subject and object positions are summarized in table 2note that verbs across these three classes allow the same subcategorization frames thus classification based on subcategorization alone would not distinguish themon the other hand each of the three classes is comprised of multiple levin classes because the latter reflect more detailed semantic distinctions among the verbs thus classification based on levin labeling would miss generalizations across the three broader classesby contrast as shown in table 2 each class has a unique pattern of thematic assignments which categorize the verbs precisely into the three classes of interestalthough the granularity of our classification differs from levin we draw on her hypothesis that semantic properties of verbs are reflected in their syntactic behaviorthe behavior that levin focuses on is the notion of diathesis alternationan alternation in the expression of the arguments of a verb such as the different mappings between transitive and intransitive that our verbs undergowhether a verb participates in a particular diathesis alternation or not is a key factor in levin approach to classificationwe like others in a computational framework have extended this idea by showing that statistics over the altemants of a verb effectively capture information about its class in 
our specific task we analyze the pattern of thematic assignments given in table 2 to develop statistical indicators that are able to determine the class of an optionally intransitive verb by capturing information across its transitive and intransitive alternantsthese indicators serve as input to a machine learning algorithm under a supervised training methodology which produces an automatic classification system for our three verb classessince we rely on patterns of behavior across multiple occurrences of a verb we begin with the problem of assigning a single class to the entire set of usages of a verb within the corpusfor example we measure properties across all occurrences of a word such as raced in order to assign a single classification to the lexical entry for the verb racethis contrasts with work classifying individual occurrences of a verb in each local context which have typically relied on training that includes instances of the verbs to be classifiedessentially developing a bias that is used in conjunction with the local context to determine the best classification for new instances of previously seen verbsby contrast our method assigns a classification to verbs that have not previously been seen in the training datathus while we do not as yet assign different classes to the instances of a verb we can assign a single predominant class to new verbs that have never been encounteredto preview our results we demonstrate that combining just five numerical indicators automatically extracted from large text corpora is sufficient to reduce the error rate in this classification task by more than 50 over chancespecifically we achieve almost 70 accuracy in a task whose baseline performance is 34 and whose expertbased upper bound is calculated at 865beyond the interest for the particular classification task at hand this work addresses more general issues concerning verb class distinctions based in argument structurewe evaluate our hypothesis that such distinctions are reflected in statistics over corpora through a computational experimental methodology in which we investigate as indicated each of the subhypotheses below in the context of the three verb classes under study in the following sections we show that all three hypotheses above are borne outin section 2 we describe the argument structure distinctions of our three verb classes in more detailin support of the first hypothesis above we discuss lexical correlates of the underlying differences in thematic assignments that distinguish the three verb classes under investigationin section 3 we show how to approximate these features by simple syntactic counts and how to perform these counts on available corporawe confirm the second hypothesis above by showing that the differences in distribution predicted by the underlying argument structures are largely found in the datain section 4 in a series of machine learning experiments and a detailed analysis of errors we confirm the third hypothesis by showing that the differences in the distribution of the extracted features are successfully used for verb classificationsection 5 evaluates the significance of these results by comparing the program accuracy to an expertbased upper boundwe conclude the paper with a discussion of its contributions comparison to related work and suggestions for future extensionsour task is to automatically build a classifier that can distinguish the three major classes of optionally intransitive verbs in englishas described above these classes are differentiated by 
their argument structuresin the first subsection below we elaborate on our description of the thematic role assignments for each of the verb classes under investigationunergative unaccusative and objectdropthis analysis yields a distinctive pattern of thematic assignment for each classof course the key to any automatic classification task is to determine a set of useful features for discriminating the items to be classifiedin the second subsection below we show how the analysis of thematic distinctions enables us to determine lexical properties that we hypothesize will exhibit useful detectable frequency differences in our corpora and thus serve as the machine learning features for our classification experimentsthe verb classes are exemplified below in sentences repeated from table 1 for ease of expositionunergative the horse raced past the barn the jockey raced the horse past the barnunaccusative the butter melted in the pan the cook melted the butter in the panobjectdrop the boy played the boy played soccerthe example sentences illustrate that all three classes participate in a diathesis alternation that relates a transitive and intransitive form of the verbhowever according to levin each class exhibits a different type of diathesis alternation which is determined by the particular semantic relations of the arguments to the verbwe make these distinctions explicit by drawing on a standard notion of thematic role as each class has a distinct pattern of thematic assignments we assume here that a thematic role is a label taken from a fixed inventory of grammaticalized semantic relations for example an agent is the doer of an action and a theme is the entity undergoing an event while admitting that such notions as agent and theme lack formal definitions the distinctions are clear enough to discriminate our three verb classesfor our purposes these roles can simply be thought of as semantic labels which are nondecomposable but there is nothing in our approach that rests on this assumptionthus our approach would also be compatible with a featurebased definition of participant roles as long as the features capture such general distinctions as for example the doer of an action and the entity acted upon note that in our focus on verb class distinctions we have not considered finergrained features that rely on more specific semantic features such as for example that the subject of the intransitive melt must be something that can change from solid to liquidwhile this type of feature may be important for semantic distinctions among individual verbs it thus far seems irrelevant to the level of verb classification that we adopt which groups verbs more broadly according to syntactic and semantic propertiesour analysis of thematic assignmentwhich was summarized in table 2 repeated here as table 3is elaborated here for each verb classthe sentences in above illustrate the relevant alternants of an unergative verb raceunergatives are intransitive action verbs whose transitive form as in can be the causative counterpart of the intransitive form the type of causative alternation that unergatives participate m is the quotinduced action alternationquot according to levin for our thematic analysis we note that the subject of an intransitive activity verb is specified to be an agentthe subject of the transitive form is indicated by the label agent of causation which indicates that the thematic role assigned to the subject is marked as the role which is introduced with the causing eventin a causative alternation 
the semantic argument of the subject of the intransitive surfaces as the object of the transitive for unergatives this argument is an agent and thus the alternation yields an object in the transitive form that receives an agent thematic role these thematic assignments are shown in the first row of table 3the sentences in illustrate the corresponding forms of an unaccusative verb meltunaccusatives are intransitive changeofstate verbs as in the transitive counterpart for these verbs also exhibits a causative alternation as in this is the quotcausativeinchoative alternationquot like unergatives the subject of a transitive unaccusative is marked as the agent of causationunlike unergatives though the alternating argument of an unaccusative is an entity undergoing a change of state without active participation and is therefore a themethe resulting pattern of thematic assignments is indicated in the second row of table 3the sentences in use an objectdrop verb playthese are activity verbs that exhibit a noncausative diathesis alternation in which the object is simply optionalthis is dubbed quotthe unexpressed object alternationquot and has several subtypes that we do not distinguish herethe thematic assignment for these verbs is simply agent for the subject and theme for the optional object see the last row of table 3for further details and support of this analysis please see the discussion in stevenson and merlo and merlo and stevenson for our purposes here the important fact to note is that each of the three classes can be uniquely identified by the pattern of thematic assignments across the two alternants of the verbs22 features for automatic classification our next task then is to derive from these thematic patterns useful features for automatically classifying the verbsin what follows we refer to the columns of table 3 to explain how we expect the thematic distinctions to give rise to distributional properties which when appropriately approximated through corpus counts will discriminate across the three classestransitivity consider the first two columns of thematic roles in table 3 which illustrate the role assignment in the transitive constructionthe prague school notion of linguistic markedness enables us to establish a scale of markedness of these thematic assignments and make a principled prediction about their frequency of occurrencetypical tests to determine the unmarked element of a pair or scale are simplicitythe unmarked element is simpler distributionthe unmarked member is more widely attested across languages and frequencythe unmarked member is more frequent the claim of markedness theory is that once an element has been identified by one test as the unmarked element of a scale then all other tests will be correlatedthe three thematic assignments appear to be ranked on a scale by the simplicity and distribution tests as we describe belowfrom this we can conclude that frequency as a third correlated test should also be ranked by the same scale and we can therefore make predictions about the expected frequencies of the three thematic assignmentsfirst we note that the specification of an agent of causation for transitive unergatives and unaccusatives indicates a causative constructioncausative constructions relate two events the causing event and the core event described by the intransitive verb the agent of causation is the agent of the causing eventthis double event structure can be considered as more complex than the single event that is found in a transitive objectdrop verb the 
simplicity test thus indicates that the causative unergatives and unaccusatives are marked in comparison to the transitive objectdrop verbswe further observe that the causative transitive of an unergative verb has an agent thematic role in object position which is subordinated to the agent of causation in subject position yielding an unusual quotdouble agentivequot thematic structurethis lexical causativization of unergatives is a distributionally rarer phenomenonfound in fewer languagesthan lexical causatives of unaccusativesin asking native speakers about our verbs we have found that lexical causatives of unergative verbs are not attested in italian french german portuguese gungbe and czechon the other hand the lexical causatives are possible for unaccusative verbs in all these languagesvietnamese appears to allow a very restricted form of causativization of unergatives limited to only those cases that have a comitative readingthe typological distribution test thus indicates that unergatives are more marked than unaccusatives in the transitive formfrom these observations we can conclude that unergatives have the most marked transitive argument structure unaccusatives have an intermediately marked transitive argument structure and objectdrops have the least marked transitive argument structure of the threeunder the assumptions of markedness theory outlined above we then predict that unergatives are the least frequent in the transitive that unaccusatives have intermediate frequency in the transitive and that objectdrop verbs are the most frequent in the transitivecausativity due to the causative alternation of unergatives and unaccusatives the thematic role of the subject of the intransitive is identical to that of the object of the transitive as shown in the second and third columns of thematic roles in table 3given the identity of thematic role mapped to subject and object positions across the two alternants we expect to observe the same noun occurring at times as subject of the verb and at other times as object of the verbin contrast for objectdrop verbs the thematic role of the subject of the intransitive is identical to that of the subject of the transitive not the object of the transitivewe therefore expect that it will be less common for the same noun to occur in subject and object position across instances of the same objectdrop verbthus we hypothesize that this pattern of thematic role assignments will be reflected in a differential amount of usage across the classes of the same nouns as subjects and objects for a given verbgenerally we would expect that causative verbs would have a greater degree of overlap of nouns in subject and object position than noncausative transitive verbs however since the causative is a transitive use and the transitive use of unergatives is expected to be rare we do not expect unergatives to exhibit a high degree of detectable overlap in a corpusthus this overlap of subjects and objects should primarily distinguish unaccusatives from the other two classes animacy finally considering the roles in the first and last columns of thematic assignments in table 3 we observe that unergative and objectdrop verbs assign an agentive role to their subject in both the transitive and intransitive while unaccusatives assign an agentive role to their subject only in the transitiveunder the assumption that the intransitive use of unaccusatives is not rare we then expect that unaccusatives will occur less often overall with an agentive subject than will the other two 
verb classeson the further assumption that agents tend to be animate entities more so than themes are we expect that unaccusatives will occur less frequently with an animate subject compared to unergative and objectdrop verbsnote the importance of our use of frequency distributions the claim is not that only agents can be animate but rather that nouns that receive an agent role will more often be animate than nouns that receive a theme roleadditional features the above interactions between thematic roles and the syntactic expressions of arguments thus lead to three features whose distributional properties appear promising for distinguishing unergative unaccusative and objectdrop verbs transitivity causativity and animacy of subjectwe also investigate two additional syntactic features the use of the passive or active voice and the use of the past participle or simple past partofspeech tag these features are related to the transitiveintransitive alternation since a passive use implies a transitive use of the verb as well as to the use of a past participle form of the verbtable 4 summarizes the features we derive from the thematic properties and our expectations concerning their frequency of usewe hypothesize that these five features will exhibit distributional differences in the observed usages of the verbs that can be used for classificationin the next section we describe the actual corpus counts that we develop to approximate the features we have identifiedclearly some of the features we have proposed are difficult or impossible to automatically extract with high accuracy from a 2 for our sample verbs the statistical correlation between the transitive and passive features is highly significant as is the correlation between the transitive and past participle features large corpus given the current state of annotationhowever we do assume that currently available corpora such as the wall street journal provide a representative and large enough sample of language from which to gather corpus counts that can approximate the distributional patterns of the verb class alternationsour work draws on two text corporaone an automatically tagged combined corpus of 65 million words the second an automatically parsed corpus of 29 million words using these corpora we develop counting procedures that yield relative frequency distributions for approximations to the five linguistic features we have determined over a sample of verbs from our three classeswe chose a set of 20 verbs from each class based primarily on the classification in levin 3 the complete list of verbs appears in table 5 the group 1group 2 designation is explained below in the section on countingas indicated in the table unergatives are mannerofmotion verbs unaccusatives are changeofstate verbs while objectdrop verbs were taken from a variety of classes in levin classification all of which undergo the unexpressed object alternationthe most frequently used classes are verbs of change of possession imagecreation verbs and verbs of creation and transformationthe selection of verbs was based partly on our intuitive judgment that the verbs were likely to be used with sufficient frequency in the wsjalso each 3 we used an equal number of verbs from each class in order to have a balanced group of itemsone potential disadvantage of this decision is that each verb class is represented equally even though they may not be equally frequent in the corporaalthough we lose the relative frequency information among the classes that could provide a better bias 
for assigning a default classification, we have the advantage that our classifier will be equally informed about each class. Note that there are only 19 unaccusative verbs, because ripped, which was initially counted in the unaccusatives, was then excluded from the analysis, as it occurred mostly in a very different usage in the corpus from the intended optionally intransitive usage.

Table 5 (object-drop row, residual): unexpressed object alternation: played, painted, kicked, carved, reaped, washed, danced, yelled, typed, knitted, borrowed, inherited, organized, rented, sketched, cleaned, packed, studied, swallowed, called.

Also, each verb presents the same form in the simple past and in the past participle; in order to simplify the counting procedure, we included only the "ed" form of the verb, on the assumption that counts on this single verb form would approximate the distribution of the features across all forms of the verb. Additionally, as far as we were able given the preceding constraints, we selected verbs that could occur in the transitive and in the passive. Finally, we aimed for a frequency cutoff of 10 occurrences or more for each verb, although for unergatives we had to use one verb that occurred only 8 times in order to have 20 verbs that satisfied the other criteria above.

In performing this kind of corpus analysis, one has to recognize the fact that current corpus annotations do not distinguish verb senses. In these counts we did not distinguish a core sense of the verb from an extended use of the verb. So, for instance, the sentence "Consumer spending jumped 1.7% in February after a sharp drop the month before" is counted as an occurrence of the manner-of-motion verb jump in its intransitive form. This particular sense extension has a transitive alternant but not a causative transitive; thus, while the possible subcategorizations remain the same, rates of transitivity and causativity may be different from those of the literal manner-of-motion sense. This is an unavoidable result of using simple automatic extraction methods, given the current state of annotation of corpora.

For each occurrence of each verb, we counted whether it was in a transitive or intransitive use, in a passive or active use, in a past participle or simple past use, in a causative or noncausative use, and with an animate subject or not. Note that, except for the vbn feature, for which we simply extract the pos tag from the corpus, all other counts are approximations to the actual linguistic behaviour of the verb, as we describe in detail below.

The first three counts were performed on the tagged ACL/DCI corpus available from the Linguistic Data Consortium, which includes the Brown corpus and years 1987-1989 of the Wall Street Journal, a combined corpus in excess of 65 million words. The counts for these features were based on the part-of-speech label within the tagged corpus; each of the three counts was normalized over all occurrences of the "ed" form of the verb, yielding a single relative frequency measure for each verb for that feature, i.e., percent transitive use, percent active use, and percent vbn use, respectively.

The last two counts were performed on a parsed version of the 1988 year of the Wall Street Journal, so that we could extract subjects and objects of the verbs more accurately. This corpus of 29 million words was provided to us by Michael Collins and was automatically parsed with the parser described in Collins. (Readers might be concerned about the portability of this method to languages for which no large parsed corpus is available. It is possible that using a fully parsed corpus is not necessary: our results were replicated in English without the need for a fully parsed corpus, when our method was applied to 23 million words of the WSJ that were automatically tagged with Ratnaparkhi's maximum entropy tagger and chunked with the partial parser Cass. The results are very similar to ours, suggesting that a more accurate tagger than the one used on our corpus might in fact be sufficient to overcome the fact that no full parse is available.) The counts and their justification are described here.

The causative feature was approximated by the following steps, intended to capture the degree to which the subject of a verb can also occur as its object. Specifically, for each verb occurrence, the subject and object were extracted from the parsed corpus. The observed subjects across all occurrences of the verb were placed into one multiset of nouns, and the observed objects into a second multiset of nouns; then the proportion of overlap between the two multisets was calculated. We define overlap as the largest multiset of elements belonging to both the subject and the object multisets, e.g., the overlap between {a, a, a, b} and {a} is {a, a, a}. The proportion is the ratio between the cardinality of the overlap multiset and the sum of the cardinalities of the subject and object multisets. For example, for the simple sets of characters above, the ratio would be 3/5, yielding a value of 60% for the caus feature. A minimal sketch of this overlap computation is given below.
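To make the overlap ratio concrete, here is a minimal sketch in Python. It is an illustration under stated assumptions rather than the original counting script: the function name caus_feature is invented, the inputs are assumed to be lists of subject and object head nouns already extracted for one verb, and the max-count multiset intersection is our own reading of the definition, chosen because it reproduces the worked example above ({a, a, a, b} versus {a} gives 3/5 = 60%); the exact multiset operation used in the original scripts may differ.

    from collections import Counter

    def caus_feature(subjects, objects):
        """Approximate the caus feature for one verb: the overlap between the
        subject and object multisets, divided by the sum of their cardinalities.

        subjects and objects are lists of head nouns observed in subject and
        object position of the verb across the corpus (an assumption of this
        sketch; the paper extracts them from a parsed corpus).
        """
        subj_counts = Counter(subjects)
        obj_counts = Counter(objects)
        # For each noun type occurring in both multisets, take the larger of its
        # two counts; this reproduces the worked example in which the overlap of
        # {a, a, a, b} and {a} is {a, a, a}.
        shared = set(subj_counts) & set(obj_counts)
        overlap = sum(max(subj_counts[noun], obj_counts[noun]) for noun in shared)
        total = len(subjects) + len(objects)
        return overlap / total if total else 0.0

    # The example from the text: subjects {a, a, a, b}, objects {a} -> 3/5 = 0.6
    assert abs(caus_feature(["a", "a", "a", "b"], ["a"]) - 0.6) < 1e-9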
One alternative for estimating subject animacy would have been manual determination of the animacy of extracted subjects, or reference to an online resource such as WordNet for determining animacy. To approximate animacy with a feature that can be extracted automatically, and without reference to a resource external to the corpus, we instead take advantage of the well-attested animacy hierarchy, according to which pronouns are the most animate; the hypothesis is that the words I, we, you, she, he, and they most often refer to animate entities. This hypothesis was confirmed by extracting 100 occurrences of the pronoun they, which can be either animate or inanimate, from our 65-million-word corpus; the occurrences immediately preceded a verb. After eliminating repetitions, 94 occurrences were left, which were classified by hand, yielding 71 animate pronouns, 11 inanimate pronouns, and 12 unclassified occurrences; thus at least 76% of usages of they were animate, and we assume the percentage of animate usages of the other pronouns to be even higher. Since the hypothesis was confirmed, we count pronouns in subject position: the values for the feature were determined by automatically extracting all subject-verb tuples including our 59 example verbs from the parsed corpus, and computing the ratio of occurrences of pronoun subjects to all subjects for each verb. A similarly minimal sketch of this pronoun-based approximation follows.
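Again purely as an illustration, the following Python sketch computes the pronoun-based animacy ratio per verb as just described. It assumes that subject-verb pairs have already been extracted from a parsed corpus and passed in as tuples; the function and variable names are hypothetical, and real input would come from the parsed WSJ rather than the toy pairs shown.

    from collections import defaultdict

    # Pronouns treated as (almost always) animate, per the animacy hierarchy
    # discussed above.
    ANIMATE_PRONOUNS = {"i", "we", "you", "she", "he", "they"}

    def anim_feature(subject_verb_pairs, target_verbs):
        """Approximate the anim feature: for each target verb, the ratio of
        pronoun subjects to all subjects observed for that verb."""
        pronoun_subjects = defaultdict(int)
        all_subjects = defaultdict(int)
        targets = set(target_verbs)
        for subject, verb in subject_verb_pairs:
            if verb not in targets:
                continue
            all_subjects[verb] += 1
            if subject.lower() in ANIMATE_PRONOUNS:
                pronoun_subjects[verb] += 1
        return {verb: (pronoun_subjects[verb] / all_subjects[verb])
                      if all_subjects[verb] else 0.0
                for verb in target_verbs}

    # Toy usage with invented pairs.
    pairs = [("they", "raced"), ("horse", "raced"), ("butter", "melted")]
    print(anim_feature(pairs, ["raced", "melted"]))  # {'raced': 0.5, 'melted': 0.0}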
Finally, as indicated in Table 5, the verbs are designated as belonging to "group 1" or "group 2". All the verbs are treated equally in our data analysis and in the machine learning experiments, but this designation does indicate a difference in details of the counting procedures described above. The verbs in group 1 had been used in an earlier study in which it was important to minimize noisy data, so they generally underwent greater manual intervention in the counts. In adding group 2 for the classification experiment, we chose to minimize the intervention, in order to demonstrate that the classification process is robust enough to withstand the resulting noise in the data. For group 2, the transitivity, voice, and vbn counts were done automatically, without any manual intervention. For group 1, these three counts were done automatically by regular expression patterns and then subjected to correction, partly by hand and partly automatically, by one of the authors. For transitivity, the adjustments vary for the individual verbs: most of the reassignments from a transitive to an intransitive labelling occurred when the following noun was not the direct object but rather a measure phrase or a date, while most of the reassignments from intransitive to transitive occurred when a particle or a preposition following the verb did not introduce a prepositional phrase, but instead indicated a passive form or was part of a phrasal verb. Some verbs were mostly used adjectivally, in which case they were excluded from the transitivity counts. For voice, the required adjustments included cases of coordination of the past participle, when the verb was preceded by a conjunction or a comma; these were collected and classified by hand as passive or active, based on intuition. Similarly, partial adjustments to the vbn counts were made by hand.

For the causativity feature, subjects and objects were determined by manual inspection of the corpus for verbs belonging to group 1, while they were extracted automatically from the parsed corpus for group 2. The group 1 verbs were sampled in three ways, depending on total frequency. For verbs with fewer than 150 occurrences, all instances of the verbs were used for subject-object extraction. For verbs whose total frequency was greater than 150 but whose vbd frequency was in the range 100-200, we extracted subjects and objects of the vbd occurrences only. For higher-frequency verbs, we used only the first 100 vbd occurrences. (For this last set of high-frequency verbs, we used the first 100 occurrences as the simplest way to collect the sample; in response to an anonymous reviewer's concern, we later verified that these counts were not different from counts obtained by random sampling of 100 vbd occurrences, and a paired t-test of the two sets of counts indicates that they are not statistically different.) The same script for computing the overlap of the extracted subjects and objects was then used on the resulting subject-verb and verb-object tuples for both group 1 and group 2 verbs. The animacy feature was calculated over subject-verb tuples extracted automatically, for both groups of verbs, from the parsed corpus.

The data collection described above yields the following data points in total: trans 27,403; pass 20,481; vbn 36,297; caus 11,307; anim 7,542. The aggregate means by class of the normalized frequencies for all verbs are shown in Table 6; item-by-item distributions are provided in Appendix A, and raw counts are available from the authors. Note that aggregate means are shown for illustration purposes only; all machine learning experiments are performed on the individual normalized frequencies for each verb, as given in Appendix A.

The observed distributions of each feature are indeed roughly as expected according to the description in Section 2. Unergatives show a very low relative frequency of the trans feature, followed by unaccusatives, then object-drop verbs. Unaccusative verbs show a high frequency of the caus feature and a low frequency of the anim feature compared to the other classes. Somewhat unexpectedly, object-drop verbs exhibit a nonzero mean caus value, leading to a three-way causative distinction among the verb classes. We suspect that the approximation that we used for causative use, the overlap between subjects and objects for a verb, also captures a "reciprocity" effect for some object-drop verbs, in which subjects and objects can be similar types of entities. Finally, although expected to be a redundant indicator of transitivity, pass and vbn 
unlike trans have very similar values for unaccusative and objectdrop verbs indicating that their distributions are sensitive to factors we have not yet investigatedone issue we must address is how precisely the automatic counts reflect the actual linguistic behaviour of the verbsthat is we must be assured that the patterns we note in the data in table 6 are accurate reflections of the differential behaviour of the verb classes and not an artifact of the way in which we estimate the features or a result of inaccuracies in the countsin order to evaluate the accuracy of our feature counts we selected two verbs from each class and determined the quottruequot value of each feature for each of those six verbs through manual countingthe six verbs were randomly selected from the group 2 subset of the verbs since counts for group 2 verbs had not undergone manual correctionthis allows us to determine the accuracy of the fully automatic counting proceduresthe selected verbs are hopped scurried folded stabilized inherited swallowed for verbs that had a frequency of over 100 in the quotedquot form we performed the manual counts on the first 100 occurrencestable 7 shows the results of the manual counts reported as proportions to facilitate comparison to the normalized automatic counts shown in adjoining columnswe observe first that overall most errors in the automatic counts occur in the unaccusative and objectdrop verbswhile tagging errors affect the vbn feature for all of the verbs somewhat we note that trans and pass are consistently underestimated for unaccusative and objectdrop verbsthese errors make the unaccusative and objectdrop feature values more similar to each other and therefore potentially harder to distinguishfurthermore because the trans and pass values are underestimated by the automatic counts and therefore lower in value they are also closer to the values for the unergative verbsfor the caus feature we predict the highest values for the unaccusative verbs and while that prediction is confirmed the automatic counts for that class also show the most errorsfinally although the general pattern of higher values for the anim feature of unergatives and objectdrop verbs is preserved in the automatic counts the feature is underestimated for almost all the verbs again making the values for that feature closer across the classes than they are in realitywe conclude that although there are inaccuracies in all the counts the general patterns expected based on our analysis of the verb classes hold in both the manual and automatic countserrors in the estimating and counting procedures are therefore not likely to be responsible for the pattern of data in table 6 above which generally matches our predictionsfurthermore the errors at least for this random sample of verbs occur in a direction that makes our task of distinguishing the classes more difficult and indicates that developing more accurate search patterns may possibly sharpen the class distinctions and improve the classification performancein this section we turn to our computational experiments that investigate whether the statistical indicators of thematic properties that we have developed can in fact be used to classify verbsrecall that the task we have set ourselves is that of automatically learning the best class for a set of usages of a verb as opposed to classifying individual occurrences of the verbthe frequency distributions of our features yield a vector for each verb that represents the estimated values for the verb on each 
dimension across the entire corpus vector template verbname trans pass vbn caus anim class example opened 69 09 21 16 36 unacd the resulting set of 59 vectors constitutes the data for our machine learning experimentswe use this data to train an automatic classifier to determine given the feature values for a new verb which of the three major classes of english optionally intransitive verbs it belongs toin pilot experiments on a subset of the features we investigated a number of supervised machine learning methods that produce automatic classifiers as well as hierarchical clustering see stevenson et al for more detailbecause we achieved approximately the same level of performance in all cases we narrowed our further experimentation to the publicly available version of the c50 machine learning system a newer version of c45 due to its ease of use and wide availabilitythe c50 system generates both decision trees and corresponding rule sets from a training set of known classificationsin our experiments we found little to no difference in performance between the trees and rule sets and report only the rule set resultsin the experiments below we follow two methodologies in training and testing each of which tests a subset of cases held out from the training datathus in all cases the results we report are on test data that was never seen in trainingthe first training and testing methodology we follow is 10fold crossvalidationin this approach the system randomly divides the data into ten parts and runs ten times on a different 90trainingdata10testdata split yielding an average accuracy and standard error across the ten test setsthis training methodology is very useful for 7 one anonymous reviewer raised the concern that we do not test on verbs that were unseen by the authors prior to finalizing the specific features to counthowever this does not reduce the generality of our resultsthe features we use are motivated by linguistic theory and derived from the set of thematic properties that discriminate the verb classesit is therefore very unlikely that they are skewed to the particular verbs we have chosenfurthermore our crossvalidation experiments described in the next subsection show that our results hold across a very large number of randomly selected subsets of this sample of verbs our application as it yields performance measures across a large number of training data test data sets avoiding the problems of outliers in a single random selection from a relatively small data set such as oursthe second methodology is a single holdout training and testing approachhere the system is run n times where n is the size of the data set each time holding out a single data vector as the test case and using the remaining n1 vectors as the training setthe single holdout methodology yields an overall accuracy rate but also unlike crossvalidationgives us classification results on each individual data vectorthis property enables us to analyze differential performance on the individual verbs and across the different verb classesunder both training and testing methodologies the baseline performance in this taska threeway classificationis 339in the single holdout methodology there are 59 test cases with 20 19 and 20 verbs each from the unergative unaccusative and objectdrop classes respectivelychance performance of picking a single class label as a default and assigning it to all cases would yield at most 20 out of the 59 cases correct or 339for the crossvalidation methodology the determination of a baseline is 
slightly more complex as we are testing on a random selection of 10 of the full data set in each runthe 339 figure represents the expected relative proportion of a test set that would be labelled correctly by assignment of a default class label to the entire test setalthough the precise makeup of the test cases vary on average the test set will represent the class membership proportions of the entire set of verbsthus as with the single holdout approach chance accuracy corresponds to a maximum of 2059 or 339 of the test set being labelled correctlythe theoretical maximum accuracy for the task is of course 100 although in section 5 we discuss some classification results from human experts that indicate that a more realistic expectation is much lower we first report the results of experiments using a training methodology of 10fold crossvalidation repeated 50 timesthis means that the 10fold crossvalidation procedure is repeated for 50 different random divisions of the datathe numbers reported are the averages of the results over all the trialsthat is the average accuracy and standard error from each random division of the data are averaged across the 50 different random divisionsthis large number of experimental trials gives us a very tight bound on the mean accuracy reported enabling us to determine with high confidence the statistical significance of differences in resultstable 8 shows that performance of classification using individual features varies greatly from little above the baseline to almost 22 above the baseline or a reduction of a third of the error rate a very good result for a single featuremoreover it seems that the highestfrequency verbs pose the most problems to the programin addition the only verb of log frequency 4 is notin conclusion we do not find that there is a simple mapping from frequency to accuracyin particular it is not the case that more frequent classes or verbs are more accurately classifiedone factor possibly contributing to the poorer performance on unaccusatives and objectdrops is the greater degree of error in the automatic counting procedures for these verbs which we discussed in section 32in addition to exploration of other linguistic features another area of future work is to develop better search patterns for transitivity and passive in particularunfortunately one limiting factor in automatic counting is that we inherit the inevitable errors in pos tags in an automatically tagged corpusfor example while the unergative verbs are classified highly accurately we note that two of the three errors in misclassifying unergatives are due to a high degree of error in taggingthe verb galloped is incorrectly tagged vbn instead of vbd in all 12 of its uses in the corpus and the verb paraded is incorrectly tagged vbn instead of vbd in 13 of its 33 uses in the corpusafter correcting only the vbn feature of these two verbs to reflect the actual part of speech overall accuracy in classification increases by almost 10 illustrating the importance of both accurate counts and accurate annotation of the corporawe can further use the single holdout results to determine the contribution of each feature to accuracy within each classwe do this by comparing the class labels assigned using the full set of five features with the class labels assigned using each size 4 subset of featuresthe difference in classifications between each fourfeature subset and the full set of features indicates the changes in class labels that we can attribute to the added feature in going from the 
fourfeature to fivefeature setthus we can see whether the features indeed contribute to discriminating the classes in the manner predicted in section 22 and summarized here in table 13we illustrate the data with a set of confusion matrices in tables 14 and 15 which show the pattern of errors according to class label for each set of featuresin each confusion matrix the rows indicate the actual class of incorrectly classified verbs and the columns indicate the assigned classfor example the first row of the first panel of table 14 shows that one unergative was incorrectly labelled as unaccusative and two unergatives as objectdropto determine the confusability of any two classes we think that this property relates to the notion of internal and external causation that is an important factor in distinguishing unergative and unaccusative verbswe refer the interested reader to stevenson and merlo which discusses the latter issue in more detail opposite of discriminability we look at two cells in the matrix the one in which verbs of the first class were assigned the label of the second class and the one in which verbs of the second class were assigned the label of the first classby examining the decrease in confusability of each pair of classes in going from a fourfeature experiment to the fivefeature experiment we gain insight into how well the added feature helps to discriminate each pair of classesan analysis of the confusion matrices reveals that the behavior of the features largely conforms to our linguistic predictions leading us to conclude that the features we counted worked largely for the reasons we had hypothesizedwe expected caus and anim to be particularly helpful in identifying unaccusatives and these predictions are confirmedcompare the second to the first panel of table 14 we see that without the caus feature the confusability between unaccusatives and unergatives and between unaccusatives and objectdrops is 9 and 7 errors respectively but when caus is added to the set of features the confusability between these pairs of classes drops substantially to 5 and 6 errors respectivelyon the other hand the confusability between unergatives and objectdrops becomes slightly worse the latter indicates that the improvement in unaccusatives is not simply due to an acrosstheboard improvement in accuracy as a result of having more featureswe see a similar pattern with the anim featurecomparing the third to the first panel of table 14 we see an even larger improvement in discriminability of unaccusatives when the anim feature is addedthe confusability of unaccusatives and unergatives drops from 7 errors to 5 errors and of unaccusatives and objectdrops from 11 errors to 6 errorsagain confusability of unergatives and objectdrops is worse with an increase in errors of 5 to 7we had predicted that the trans feature would make a threeway distinction among the verb classes based on its predicted linear relationship between the classes we had further expected that pass and vbn would behave similarly since these features are correlated to transto make a threeway distinction among the verb classes we would expect confusability between all three pairs of verb classes to decrease with the addition of trans pass or vbnwe find that these predictions are confirmed in partfirst consider the trans featurecomparing the second to the first panel of table 15 we find that unergatives are already accurately classified and the addition of trans to the set does indeed greatly reduce the confusability of unaccusatives and 
objectdrops with the number of errors dropping from 12 to 6however we also observe that the confusability of unergatives and unaccusatives is not improved and the confusability of unergatives and objectdrops is worsened with the addition of the trans feature with errors in the latter case increasing from 4 to 7we conclude that the expected threeway discriminability of trans is most apparent in the reduced confusion of unaccusative and objectdrop verbsour initial prediction was that pass and vbn would behave similarly to trans that is also making a threeway distinction among the classesalthough the aggregate data revealed little difference in these feature values between unaccusatives and objectdropscomparing the third to the first panel of table 15 we observe that the addition of the pass feature hinders the discriminability of unergatives and unaccusatives it does help in discriminating the other pairs of classes but only slightly the vbn feature shows a similar pattern but is much more helpful at distinguishing unergatives from objectdrops and objectdrops from unaccusativesin comparing the fourth to the first panel of table 15 we find that the confusability of unergatives and objectdrops is reduced from 9 errors to 7 and of unaccusatives and objectdrops from 10 errors to 6the latter result is somewhat surprising since the aggregate vbn data for the unaccusative and objectdrop classes are virtually identicalwe conclude that contribution of a feature to classification is not predictable from the apparent discriminability of its numeric values across the classesthis observation emphasizes the importance of an experimental method to evaluating our approach to verb classificationin order to evaluate the performance of the algorithm in practice we need to compare it to the accuracy of classification performed by an expert which gives a realistic upper bound for the taskthe lively theoretical debate on class membership of verbs and the complex nature of the linguistic information necessary to accomplish this task led us to believe that the task is difficult and not likely to be performed at 100 accuracy even by experts and is also likely to show differences in classification between expertswe report here the results of two experiments which measure expert accuracy in classifying our verbs as well as interexpert agreementto enable comparison of responses we performed a closedform questionnaire study where the number and types of the target classes are defined in advance for which we prepared a forcedchoice and a nonforcedchoice variantthe forcedchoice study provides data for a maximally restricted experimental situation which corresponds most closely to the automatic verb classification taskhowever we are also interested in slightly more natural resultsprovided by the nonforcedchoice taskwhere the experts can assign the verbs to an quotothersquot categorywe asked three experts in lexical semantics to complete the forcedchoice electronic questionnaire studyneither author was among the three experts who were all professionals in computational or theoretical linguistics with a specialty in lexical semanticsmaterials consisted of individually randomized lists of the same 59 verbs used for the machine learning experiments using levin electronic index available from chicago university pressthe verbs were to be classified into the three target classesunergative unaccusative and objectdrop which were described in the instructions table 16 shows an analysis of the results reporting both percent agreement 
and pairwise agreement among the experts and the programassessing the percentage of verbs on which the experts agree gives us an intuitive measurehowever this measure does not take into account how much the experts agree over the expected agreement by chancethe latter is provided by the kappa statistic which we calculated following klauer even by trained experts when compared to the gold standard with the highest percent agreement with levin at 865second with respect to comparison of the experts among themselves the rate of agreement is never very high and the variability in agreement is considerable ranging from 53 to 66this evaluation is also supported by a 3way agreement measure applying this calculation we find that the percentage of verbs to which the three experts gave the same classification is smaller than any of the pairwise agreements indicating that the experts do not all agree on the same subset of verbsthe observation that the experts often disagree on this difficult task suggests that a combination of expert judgments might increase the upper boundwe tried the simplest combination by creating a new classification using a majority vote each verb was assigned the label given by at least two expertsonly three cases did not have any majority label in these cases we used the classification of the most accurate expertthis new classification does not improve the upper bound reaching only 864 compared to the gold standardthe evaluation is also informative with respect to the performance of the programon the one hand we observe that if we take the best performance achieved by an expert in this task865as the maximum achievable accuracy in classification our algorithm then reduces the error rate over chance by approximately 68 a very respectable resultin fact the accuracy of 695 achieved by the program is only 15 less than one of the human experts in comparison to the gold standardon the other hand the algorithm still does not perform at expert level as indicated by the fact that for all experts the lowest agreement score is with the programone interesting question is whether experts and program disagree on the same verbs and show similar patterns of errorsthe program makes 18 errors in total compared to the gold standardhowever in 9 cases at least one expert agrees with the classification given by the programthe program makes fewer errors on unergatives and comparably many on unaccusatives and objectdrops indicating that members of the latter two classes are quite difficult to classifythis differs from the pattern of average agreement between the experts and levin who agree on 177 unergatives 167 unaccusatives and 113 objectdropsthis clearly indicates that the objectdrop class is the most difficult for the human experts to definethis class is the most heterogeneous in our verb list consisting of verbs from several subclasses of the quotunexpressed object alternationquot class in we conclude that the verb classification task is likely easier for very homogeneous classes and more difficult for more broadly defined classes even when the exemplars share the critical syntactic behaviorson the other hand frequency does not appear to be a simple factor in explaining patterns of agreement between experts or increases in accuracyas in section 43 we again analyze the relation between log frequency of the verbs and classification performance here considering the performance of the expertswe grouped verbs in three log frequency classes verbs with log frequency less than 2 those with log frequency 
between 2 and 3 and those with log frequency over 3 the lowfrequency group had 24 verbs the intermediatefrequency group had 25 verbs and the highfrequency group had 10 verbs we found that verbs with high and low frequency yield better accuracy and agreement among the experts than the verbs with mid frequencyneither the accuracy of the majority classification nor the accuracy of the expert that had the best agreement with levin were linearly affected by frequencyfor the majority vote verbs with frequency less than 100 yield an accuracy of 92 k 84 verbs with frequency between 100 and 1000 accuracy 80 k 69 and for verbs with frequency over 1000 accuracy 90 k 82for the quotbestquot expert the pattern is similar verbs with frequency less than 100 yield an accuracy of 875 k 74 verbs with frequency between 100 and 1000 accuracy 84 k 76 and verbs with frequency over 1000 accuracy 90 k 82we can see here that different frequency groups yield different classification behaviorhowever the relation is not simple and it is clearly affected by the composition of the frequency group the middle group contains mostly unaccusative and objectdrop verbs which are the verbs with which our experts have the most difficultythis confirms that the class of the verb is the predominant factor in their pattern of errorsnote also that the pattern of accuracy across frequency groupings is not the same as that of the program again indicating qualitative differences in performance between the program and the expertsfinally one possible shortcoming of the above analysis is that the forcedchoice task while maximally comparable to our computational experiments may not be a natural one for human expertsto explore this issue we asked two different experts in lexical semantics to complete the nonforcedchoice electronic questionnaire study again neither author served as one of the expertsin this task in addition to the three verb classes of interest an answer of quototherquot was allowedmaterials consisted of individually randomized lists of 119 target and filler verbs taken from levin electronic index as abovethe targets were again the same 59 verbs used for the machine learning experimentsto avoid unwanted priming of target items the 60 fillers were automatically selected from the set of verbs that do not share any class with any of the senses of the 59 target verbs in levin indexin this task if we take only the target items into account the experts agreed 746 of the time with each other and 86 and 69 with the gold standardthese results show that the forcedchoice and nonforcedchoice task are comparable in accuracy of classification and interjudge agreement on the target classes giving us confidence that the forcedchoice results provide a reasonably stable upper bound for computational experimentsthe work presented here contributes to some central issues in computational linguistics by providing novel insights data and methodology in some cases and by reinforcing some previously established results in othersour research stems from three main hypotheses we discuss the relevant debate on each of these hypotheses and the contribution of our results to each in the following subsectionsargument structure has previously been recognized as one of the most promising candidates for accurate classificationfor example basili pazienza and velardi argue that relational properties of verbstheir argument structureare more informative for classification than their definitional properties their arguments rest on linguistic and psycholinguistic 
results on classification and language acquisition our results confirm the primary role of argument structure in verb classificationour experimental focus is particularly clear in this regard because we deal with verbs that are quotminimal pairsquot with respect to argument structureby classifying verbs that show the same subcategorizations into different classes we are able to eliminate one of the confounds in classification work created by the fact that subcategorization and argument structure are often covariantwe can infer that the accuracy in our classification is due to argument structure information as subcategorization is the same for all verbsthus we observe that the content of the thematic roles assigned by a verb is crucial for classificationour results further support the assumption that thematic differences across verb classes are apparent not only in differences in subcategorization frames but also in differences in their frequenciesthis connection relies heavily on the hypothesis that lexical semantics and lexical syntax are correlated following levin however this position has been challenged by basili pazienza and velardi and boguraev and briscoe among othersfor example in an attempt to assess the actual completeness and usefulness of the longman dictionary of contemporary english entries boguraev and briscoe found that people assigned a quotchange of possessionquot meaning both to verbs that had dativerelated subcategorization frames and to verbs that did notconversely they also found that both verbs that have a changeofpossession component in their meaning and those that do not could have a dative codethey conclude that the thesis put forth by levin is only partially supportedbasili pazienza and velardi show further isolated examples meant to illustrate that lexical syntax and semantics are not in a onetoone relationmany recent results however seem to converge in supporting the view that the relation between lexical syntax and semantics can be usefully exploited our work in particular underscores the relation between the syntactic manifestations of argument structure and lexical semantic classin light of these recent successes the conclusions in boguraev and briscoe are clearly too pessimisticin fact their results do not contradict the more recent onesfirst of all it is not the case that if an implication holds from argument structure to subcategorization the converse also holdsit comes as no surprise that verbs that do not have any changeofpossession component in their meaning may also show dative shift syntacticallysecondly as boguraev and briscoe themselves note levin statement should be interpreted as a statistical trend and as such boguraev and briscoe results also confirm itthey claim however that in adopting a statistical point of view predictive power is lostour work shows that this conclusion is not appropriate either the correlation is strong enough to be useful to predict semantic classification at least for the argument structures that have been investigatedgiven the manifestation of argument structure in statistical distributions we view corpora especially if annotated with currently available tools as repositories of implicit grammars which can be exploited in automatic verbclassification tasksbesides establishing a relationship between syntactic alternations and underlying semantic properties of verbs our approach extends existing corpusbased learning techniques to the detection and automatic acquisition of argument structureto date most work in this area 
has focused on learning of subcategorization from unannotated or syntactically annotated text others have tackled the problem of lexical semantic classification but using only subcategorization frequencies as input data specifically these researchers have not explicitly addressed the definition of features to tap directly into thematic role differences that are not reflected in subcategorization distinctionson the other hand when learning of thematic role assignment has been the explicit goal the text has been semantically annotated or external semantic resources have been consulted we extend these results by showing that thematic information can be induced from linguisticallyguided counts in a corpus without the use of thematic role tagging or external resources such as wordnetfinally our results converge with the increasing agreement that corpusbased techniques are fruitful in the automatic construction of computational lexicons providing machine readable dictionaries with complementary reusable resources such as frequencies of argument structuresmoreover these techniques produce data that is easily updated as the information contained in corpora changes all the time allowing for adaptability to new domains or usage patternsthis dynamic aspect could be exploited if techniques such as the one presented here are developed which can work on a rough collection of texts and do not require a carefully balanced corpus or timeconsuming semantic taggingwe conclude from the discussion above that our own work and work of others support our hypotheses concerning the importance of the relation between classes of verbs and the syntactic expression of argument structure in corporain light of this it is instructive to evaluate our results in the context of other work that shares this viewsome related work requires either exact exemplars for acquisition or external precompiled resourcesfor example dorr summarizes a number of automatic classification experiments based on encoding levin alternations directly as symbolic properties of a verb each verb is represented as the binary settings of a vector of possible alternations acquired through a large corpus analysis yielding exemplars of the alternationto cope with sparse data the corpus information is supplemented by syntactic information obtained from the ldoce and semantic information obtained from wordnetthis procedure classifies 95 unknown verbs with 61 accuracydorr also remarks that this result could be improved to 83 if missing ldoce codes were addedwhile dorr work requires finding exact exemplars of the alternation oishi and matsumoto present a method that like ours uses surface indicators to approximate underlying propertiesfrom a dictionary of dependency relations they extract casemarking particles as indicators of the grammatical function properties of the verbs such as subject and objectadverbials indicate aspectual propertiesthe combination of these two orthogonal dimensions gives rise to a classification of japanese verbsother work has sought to combine corpusbased extraction of verbal properties with statistical methods for classifying verbssiegel work on automatic aspectual classification also reveals a close relationship between verbrelated syntactic and semantic informationin this work experiments to learn aspectual classification from linguisticallybased numerical indicators are reportedusing combinations of seven statistical indicators it is possible to learn the distinction between events and states for 739 verb tokens with an 
improvement of 10 over the baseline and to learn the distinction between culminated and nonculminated events for 308 verb tokens with an improvement of 11 in work on lexical semantic verb classification lapata and brew further support the thesis of a predictive correlation between syntax and semantics in a statistical framework showing that the frequency distributions of subcategorization frames within and across classes can disambiguate the usages of a verb with more than one known lexical semantic classon 306 verbs that are disambiguated by subcategorization frame they achieve 918 accuracy on a task with a 657 baseline for a 76 reduction in error rateon 31 verbs that can take the same subcategorization in different classesmore similar to our situation in that subcategorization alone cannot distinguish the classesthey achieve 839 accuracy compared to a 613 baseline for a 58 reduction in erroraone and mckee working with a much coarsergrained classification of verbs present a technique for predicateargument extraction from multilingual textslike ours their work goes beyond statistics over subcategorizations to include counts over the more directly semantic feature of animacyno numerical evaluation of their results is providedschulte i am walde applies two clustering methods to two types of frequency data for 153 verbs from 30 levin classesone set of experiments uses verb subcategorization frequencies and the other uses subcategorization frequencies plus selectional preferences the best results achieved are a correct classification of 58 verbs out of 153 with a precision of 61 and recall of 36 obtained using only subcategorization frequencieswe calculate that this corresponds to an fscore of 45 with balanced precision and recallthe use of selectional preference information decreases classification performance under either clustering algorithmthe results are somewhat difficult to evaluate further as there is no description of the classes includedalso the method of counting correctness entails that some quotcorrectquot classes may be split across distant clusters so it is unclear how coherent the class behaviour actually ismccarthy proposes a method to identify diathesis alternationsafter learning subcategorization frames based on a parsed corpus selectional preferences are acquired for slots of the subcategorization frames using probability distributions over wordnet classesalternations are detected by testing the hypothesis that given any verb the selectional preferences for arguments occurring in alternating slots will be more similar to each other than those for slots that do not alternatefor instance given a verb participating in the causative alternation its selectional preferences for the subject in an intransitive use and for the object in a transitive use will be more similar to each other than the selectional preferences for these two slots of a verb that does not participate in the causative alternationthis method achieves the best accuracy for the causative and the conative alternations despite sparseness of datamccarthy reports that a simpler measure of selectional preferences based simply on head words yields a lower 63 accuracysince this latter measure is very similar to our caus feature we think that our results would also improve by adopting a similar method of abstracting from head words to classesour work extends each of these approaches in some dimension thereby providing additional support for the hypothesis that syntax and semantics are correlated in a systematic and 
predictive waywe extend dorr alternationbased automatic classification to a statistical settingby using distributional approximations of indicators of alternations we solve the sparse data problem without recourse to external sources of knowledge such as the ldoce and in addition we are able to learn argument structure alternations using exclusively positive exampleswe improve on the approach of oishi and matsumoto by learning argument structure properties which unlike grammatical functions are not marked morphologically and by not relying on external sources of knowledgefurthermore in contrast to siegel and lapata and brew our method applies successfully to previously unseen wordsie test cases that were not represented in the training setquot this is a very important property of lexical acquisition algorithms to be used for lexicon organization as their main interest lies in being applied to unknown wordson the other hand our approach is similar to the approaches of siegel and lapata and brew in attempting to learn semantic notions from distributions of indicators that can be gleaned from a textin our case we are trying to learn argument structure a finergrained classification than the dichotomic distinctions studied by siegellike lapata and brew three of our indicatorstrans vbn passare based on the assumption that distributional differences in subcategorization frames are related to underlying verb class distinctionshowever we also show that other syntactic indicatorscaus and animcan be devised that tap directly into the argument structure of a verbunlike schulte i am walde we find the use of these semantic features helpful in classificationusing only trans and its related features vbn and pass we achieve only 55 accuracy in comparison to 698 using the full set of featuresthis can perhaps be seen as support for our hypothesis that argument structure is the right level of representation for verb class distinctions since it appears that our features that capture thematic differences are useful in classification while schulte i am walde selectional restriction features were notaone and mckee also use features that are intended to tap into both subcategorization and thematic role distinctionsfrequencies of the transitive use and animate subject usein our task we show that subject animacy can be profitably approximated solely with pronoun counts avoiding the need for reference to external sources of semantic information used by aone and mckeein addition our work extends theirs in investigating much finergrained verb classes and in classifying verbs that have multiple argument structureswhile aone and mckee define each of their classes according to a single argument structure we demonstrate the usefulness of syntactic features that capture relations across different argument structures of a single verbfurthermore while aone and mckee and others look at relative frequency of subcategorization frames or relative frequency of a property of nps within a particular grammatical function we also look at the paradigmatic relations across a text between thematic arguments in different alternations mccarthy shows that a method very similar to ours can be used for identifying alternationsher qualitative results confirm however what was argued in section 2 above counts that tap directly into the thematic assignments are necessary to fully identify a diathesis alternationin fact on close inspection mccarthy method does not distinguish between the inducedaction alternation and the causativeinchoative 
alternation thus her method does not discriminate two of our classesit is likely that a combination of our method which makes the necessary thematic distinctions and her more sophisticated method of detecting alternations would give very good resultsthe classification results show that our method is powerful and suited to the classification of unknown verbshowever we have not yet addressed the problem of verbs that can have multiple classificationswe think that many cases of ambiguous classification of the lexical entry for a verb can be addressed with the notion of intersective sets introduced by dang et al this is an important concept which proposes that quotregularquot ambiguity in classificationie sets of verbs that have the same multiway classifications according to levin can be captured with a finergrained notion of lexical semantic classesthus subsets of verbs that occur in the intersection of two or more levin classes form in themselves a coherent semantic classextending our work to exploit this idea requires only defining the classes appropriately the basic approach will remain the samegiven the current demonstration of our method on finegrained classes that share subcategorization alternations we are optimistic regarding its future performance on intersective setsbecause we assume that thematic properties are reflected in alternations of argument structure our features require searching for relations across occurrences of each verbthis motivated our initial experimental focus on verb typeshowever when we turn to consider ambiguity we must also address the problem that individual instances of verbs may come from different classes and we may want to classify the individual tokens of a verbin future research we plan to extend our method to the case of ambiguous tokens by experimenting with the combination of several sources of information the classification of each instance will be a function of a bias for the verb type but also of features of the usage of the instance being classified finally corpusbased learning techniques collect statistical information related to language use and are a good starting point for studying human linguistic performancethis opens the way to investigating the relation of linguistic data in text to people linguistic behaviour and usefor example merlo and stevenson show that contrary to the naive assumption speakers preferences in syntactic disambiguation are not simply directly related to frequency thus the kind of corpus investigation we are advocatingfounded on indepth linguistic analysisholds promise for building more natural nlp systems which go beyond the simplest assumptions and tie together statistical computational linguistic results with experimental psycholinguistic datain this paper we have presented an indepth case study in which we investigate machine learning techniques for automatically classifying a set of verbs into classes determined by their argument structureswe focus on the three major classes of optionally intransitive verbs in english which cannot be discriminated by their subcategorizations and therefore require distinctive features that are sensitive to the thematic properties of the verbswe develop such features and automatically extract them from very large syntactically annotated corporaresults show that a small number of linguistically motivated lexical features are sufficient to achieve a 698 accuracy rate in a threeway classification task with a baseline performance of 339 for which the best performance achieved by a human 
expert is 865returning to our original questions of what can and need be learned about the relational properties of verbs we conclude that argument structure is both a highly useful and learnable aspect of verb knowledgewe observe that relevant semantic properties of verb classes may be successfully approximated through countable syntactic featuresin spite of noisy data the lexical properties of interest are reflected in the corpora robustly enough to positively contribute to classificationwe remark however that deep linguistic analysis cannot be eliminatedin our approach it is embedded in the selection of the features to countspecifically our features are derived through a detailed analysis of the differences in thematic role assignments across the verb classes under investigationthus an important contribution of the work is the proposed mapping between the thematic assignment properties of the verb classes and the statistical distributions of their surface syntactic propertieswe think that using such linguistically motivated features makes the approach very effective and easily scalable we report a 54 reduction in error rate using only five features that are readily extractable from automatically annotated corporawe gratefully acknowledge the financial support of the following organizations the swiss nsf the united states nsf the canadian nserc the university of toronto and the information sciences council of rutgers universitymuch of this research was carried out while pm was a visiting scientist at ircs university of pennsylvania and while ss was a faculty member at rutgers university both of whose generous and supportive environments were of great benefit to uswe thank martha palmer michael collins natalia kariaeva kamin whitehouse julie boland kiva dickinson and three anonymous reviewers for their helpful comments and suggestions and for their contributions to this researchwe also greatly thank our experts for the gracious contribution of their time in answering our electronic questionnairethe following three tables contain the overall frequency and the normalized feature values for each of the 59 verbs in our experimental set
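As a small arithmetic check of two figures quoted in this record (the 69.8 accuracy against the 33.9 chance baseline, and the reported 54 reduction in error rate), the derivation below simply restates those numbers; no new quantities are introduced and the symbols E_base and E_sys are labels added here for readability only.

```latex
% error-rate reduction implied by the reported accuracies:
% chance baseline accuracy 33.9%, classifier accuracy 69.8%
E_{\mathrm{base}} = 1 - 0.339 = 0.661, \qquad
E_{\mathrm{sys}}  = 1 - 0.698 = 0.302,
\qquad
\frac{E_{\mathrm{base}} - E_{\mathrm{sys}}}{E_{\mathrm{base}}}
  = \frac{0.661 - 0.302}{0.661} \approx 0.54
```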
J01-3003
automatic verb classification based on statistical distributions of argument structure automatic acquisition of lexical knowledge is critical to a wide range of natural language processing tasks especially important is knowledge about verbs which are the primary source of relational information in a sentence the predicateargument structure that relates an action or state to its participants in this work we report on supervised learning experiments to automatically classify three major types of english verbs based on their argument structure specifically the thematic roles they assign to participants we use linguisticallymotivated statistical indicators extracted from large annotated corpora to train the classifier achieving 698 accuracy for a task whose baseline is 34 and whose expertbased upper bound we calculate at 865 a detailed analysis of the performance of the algorithm and of its errors confirms that the proposed features capture properties related to the argument structure of the verbs our results validate our hypotheses that knowledge about thematic relations is crucial for verb classification and that it can be gleaned from a corpus by automatic means we thus demonstrate an effective combination of deeper linguistic knowledge with the robustness and scalability of statistical techniques we work with a decision tree and selected linguistic cues to classify english verbs into three classes unaccusative unergative and objectdrop
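To make the classification setup summarized in this record concrete, the sketch below trains a decision tree on the kind of five-feature vectors the record describes (normalized counts approximating trans, vbn, pass, caus, and anim) for a three-way choice among unergative, unaccusative, and object-drop verbs. It is only an illustration under stated assumptions: scikit-learn's DecisionTreeClassifier stands in for the C5.0 learner actually used, and the handful of feature rows are invented placeholders, not values from the paper's appendix tables.

```python
# Illustrative sketch only: sklearn's DecisionTreeClassifier stands in for C5.0,
# and the feature vectors below are invented placeholders, not the paper's data.
from sklearn.tree import DecisionTreeClassifier

FEATURES = ["trans", "vbn", "pass", "caus", "anim"]  # normalized corpus counts per verb type

# toy training set: one row of five normalized indicator values per verb
X = [
    [0.23, 0.10, 0.05, 0.02, 0.30],   # hypothetical unergative-like profile
    [0.55, 0.40, 0.35, 0.25, 0.08],   # hypothetical unaccusative-like profile
    [0.70, 0.45, 0.30, 0.03, 0.22],   # hypothetical object-drop-like profile
    # ... the paper uses 59 verbs with counts drawn from large parsed corpora
]
y = ["unergative", "unaccusative", "object-drop"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)

# with real data one would evaluate by cross-validation over verb types and
# compare mean accuracy against the 33.9% chance baseline
print(clf.predict([[0.60, 0.42, 0.33, 0.20, 0.10]]))  # classify an unseen verb's profile
```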
a machine learning approach to coreference resolution of noun phrases in this paper we present a learning approach to coreference resolution of noun phrases in unrestricted text the approach learns from a small annotated corpus and the task includes resolving not just a certain type of noun phrase but rather general noun phrases it also does not restrict the entity types of the noun phrases that is coreference is assigned whether they are of quotorganizationquot quotpersonquot or other types we evaluate our approach on common data sets and obtain encouraging results indicating that on the general noun phrase coreference task the learning approach holds promise and achieves accuracy comparable to that of nonlearning approaches our system is the first learningbased system that offers performance comparable to that of stateoftheart nonlearning systems on these data sets in this paper we present a learning approach to coreference resolution of noun phrases in unrestricted textthe approach learns from a small annotated corpus and the task includes resolving not just a certain type of noun phrase but rather general noun phrasesit also does not restrict the entity types of the noun phrases that is coreference is assigned whether they are of quotorganizationquot quotpersonquot or other typeswe evaluate our approach on common data sets and obtain encouraging results indicating that on the general noun phrase coreference task the learning approach holds promise and achieves accuracy comparable to that of nonlearning approachesour system is the first learningbased system that offers performance comparable to that of stateoftheart nonlearning systems on these data setscoreference resolution is the process of determining whether two expressions in natural language refer to the same entity in the worldit is an important subtask in natural language processing systemsin particular information extraction systems like those built in the darpa message understanding conferences have revealed that coreference resolution is such a critical component of ie systems that a separate coreference subtask has been defined and evaluated since muc6 in this paper we focus on the task of determining coreference relations as defined in muc6 and muc7 specifically a coreference relation denotes an identity of reference and holds between two textual elements known as markables which can be definite noun phrases demonstrative noun phrases proper names appositives subnoun phrases that act as modifiers pronouns and so onthus our coreference task resolves general noun phrases and is not restricted to a certain type of noun phrase such as pronounsalso we do not place any restriction on the possible candidate markables that is all markables whether they are quotorganizationquot quotpersonquot or other entity types are consideredthe ability to link coreferring noun phrases both within and across sentences is critical to discourse analysis and language understanding in generalsystem architecture of natural language processing pipelinewe adopt a corpusbased machine learning approach to noun phrase coreference resolutionthis approach requires a relatively small corpus of training documents that have been annotated with coreference chains of noun phrasesall possible markables in a training document are determined by a pipeline of languageprocessing modules and training examples in the form of feature vectors are generated for appropriate pairs of markablesthese training examples are then given to a learning algorithm to build a 
classifierto determine the coreference chains in a new document all markables are determined and potential pairs of coreferring markables are presented to the classifier which decides whether the two markables actually coreferwe give the details of these steps in the following subsectionsa prerequisite for coreference resolution is to obtain most if not all of the possible markables in a raw input textto determine the markables a pipeline of natural language processing modules is used as shown in figure 1they consist of tokenization sentence segmentation morphological processing partofspeech tagging noun phrase identification named entity recognition nested noun phrase extraction and semantic class determinationas far as coreference resolution is concerned the goal of these nlp modules is to determine the boundary of the markables and to provide the necessary information about each markable for subsequent generation of features in the training examplesour partofspeech tagger is a standard statistical tagger based on the hidden markov model similarly we built a statistical hmmbased noun phrase identification module that determines the noun phrase boundaries solely based on the partofspeech tags assigned to the words in a sentencewe also implemented a module that recognizes mucstyle named entities that is organization person location date time money and percentour named entity recognition module uses the hmm approach of bikel schwartz and weischedel which learns from a tagged corpus of named entitiesthat is our partofspeech tagger noun phrase identification module and named entity recognition module are all based on hmms and learn from corpora tagged with parts of speech noun phrases and named entities respectivelynext both the noun phrases determined by the noun phrase identification module and the named entities are merged in such a way that if the noun phrase overlaps with a named entity the noun phrase boundaries will be adjusted to subsume the named entitythe nested noun phrase extraction module subsequently accepts the noun phrases and determines the nested phrases for each noun phrasethe nested noun phrases are divided into two groups finally the markables needed for coreference resolution are the union of the noun phrases named entities and nested noun phrases foundfor markables without any named entity type semantic class is determined by the semantic class determination modulemore details regarding this module are given in the description of the semantic class agreement featureto achieve acceptable recall for coreference resolution it is most critical that the eligible candidates for coreference be identified correctly in the first placein order to test our system effectiveness in determining the markables we attempted to match the markables generated by our system against those appearing in the coreference chains annotated in 100 sgml documents a subset of the training documents available in muc6we found that our system is able to correctly identify about 85 of the noun phrases appearing in coreference chains in the 100 annotated sgml documentsmost of the unmatched noun phrases are of the following types to build a learningbased coreference engine we need to devise a set of features that is useful in determining whether two markables corefer or notin addition these features must be generic enough to be used across different domainssince the muc6 and muc7 tasks define coreference guidelines for all types of noun phrases and different types of noun phrases behave differently in 
terms of how they corefer our features must be able to handle this and give different coreference decisions based on different types of noun phrasesin general there must be some features that indicate the type of a noun phrasealtogether we have five features that indicate whether the markables are definite noun phrases demonstrative noun phrases pronouns or proper namesthere are many important knowledge sources useful for coreferencewe wanted to use those that are not too difficult to computeone important factor is the distance between the two markablesmcenery tanaka and botley have done a study on how distance affects coreference particularly for pronounsone of their conclusions is that the antecedents of pronouns do exhibit clear quantitative patterns of distributionthe distance feature has different effects on different noun phrasesfor proper names locality of the antecedents may not be so importantwe include the distance feature so that the learning algorithm can best decide the distribution for different classes of noun phrasesthere are other features that are related to the gender number and semantic class of the two markablessuch knowledge sources are commonly used for the task of determining coreferenceour feature vector consists of a total of 12 features described below and is derived based on two extracted markables i and j where i is the potential antecedent and j is the anaphorinformation needed to derive the feature vectors is provided by the pipeline of languageprocessing modules prior to the coreference engine falseif the string of i matches the string of j return true else return falsewe first remove articles and demonstrative pronouns from the strings before performing the string comparisontherefore the license matches this license that computer matches computer5definite noun phrase feature its possible values are true or falsein our definition a definite noun phrase is a noun phrase that starts with the word thefor example the car is a definite noun phraseif j is a definite noun phrase return true else return false true or falsea demonstrative noun phrase is one that starts with the word this that these or thoseif is a demonstrative noun phrase then return true else return false7number agreement feature its possible values are true or falseif i and j agree in number the value is true otherwise falsepronouns such as they and them are plural while it him and so on are singularthe morphological root of a noun is used to determine whether it is singular or plural if the noun is not a pronounare true false or unknownin our system we defined the following semantic classes quotfemalequot quotmalequot quotpersonquot quotorganizationquot quotlocationquot quotdatequot quottimequot quotmoneyquot quotpercentquot and quotobjectquot these semantic classes are arranged in a simple isa hierarchyeach of the quotfemalequot and quotmalequot semantic classes is a subclass of the semantic class quotpersonquot while each of the semantic classes quotorganizationquot quotlocationquot quotdatequot quottimequot quotmoneyquot and quotpercentquot is a subclass of the semantic class quotobjectquot each of these defined semantic classes is then mapped to a wordnet synset for example quotmalequot is mapped to the second sense of the noun male in wordnet quotlocationquot is mapped to the first sense of the noun location and so onthe semantic class determination module assumes that the semantic class for every markable extracted is the first sense of the head noun of the markablesince wordnet orders the 
senses of a noun by their frequency this is equivalent to choosing the most frequent sense as the semantic class for each nounif the selected semantic class of a markable is a subclass of one of our defined semantic classes c then the semantic class of the markable is c else its semantic class is quotunknownquot the semantic classes of markables i and j are in agreement if one is the parent of the other or they are the same the value returned for such cases is trueif the semantic classes of i and j are not the same return falseif either semantic class is quotunknownquot then the head noun strings of both markables are comparedif they are the same return true else return unknown on the named entity typefor i and j that are dates by using string comparison the day month and year values are extracted and comparedif they match then j is an alias of ifor i and j that are quotpersonquot such as mr simpson and bent simpson the last words of the noun phrases are compared to determine whether one is an alias of the otherfor organization names the alias function also checks for acronym match such as ibm and international business machines corpin this case the longer string is chosen to be the one that is converted into the acronym formthe first step is to remove all postmodifiers such as corp and ltd then the acronym function considers each word in turn and if the first letter is capitalized it is used to form the acronymtwo variations of the acronyms are produced one with a period after each letter and one without12appositive feature its possible values are true or falseif j is in apposition to i return true else return falsefor example the markable the chairman of microsoft corp is in apposition to bill gates in the sentence bill gates the chairman of microsoft corp our system determines whether j is a possible appositive construct by first checking for the existence of verbs and proper punctuationlike the above example most appositives do not have any verb and an appositive is separated by a comma from the most immediate antecedent i to which it refersfurther at least one of i and j must be a proper namethe muc6 and muc7 coreference task definitions are slightly differentin muc6 j needs to be a definite noun phrase to be an appositive while both indefinite and definite noun phrases are acceptable in muc7as an example table 1 shows the feature vector associated with the antecedent i frank newman and the anaphor j vice chairman in the following sentence feature vector of the markable pair because of capitalization markables in the headlines of muc6 and muc7 documents are always considered proper names even though some are notour system solves this inaccuracy by first preprocessing a headline to correct the capitalization before passing it into the pipeline of nlp modulesonly those markables in the headline that appear in the text body as proper names have their capitalization changed to match those found in the text bodyall other headline markables are changed to lowercaseconsider a coreference chain al a2 a3 a4 found in an annotated training documentonly pairs of noun phrases in the chain that are immediately adjacent are used to generate the positive training examplesthe first noun phrase in a pair is always considered the antecedent while the second is the anaphoron the other hand negative training examples are extracted as followsbetween the two members of each antecedentanaphor pair there are other markables extracted by our languageprocessing modules that either are not found in any 
coreference chain or appear in other chainseach of them is then paired with the anaphor to form a negative examplefor example if markables a b and b1 appear between al and a2 then the negative examples are a a2 b a2 and b1 a2note that a and b do not appear in any coreference chain while b1 appears in another coreference chainfor an annotated noun phrase in a coreference chain in a training document the same noun phrase must be identified as a markable by our pipeline of languageprocessing modules before this noun phrase can be used to form a feature vector for use as a training examplethis is because the information necessary to derive a feature vector such as semantic class and gender is computed by the languageprocessing modulesif an annotated noun phrase is not identified as a markable it will not contribute any training exampleto see more clearly how training examples are generated consider the following four sentences each sentence is shown twice with different noun phrase boundariessentences labeled are obtained directly from part of the training documentthe letters in the subscripts uniquely identify the coreference chains while the numbers identify the noun phrasesnoun phrases in sentences labeled are extracted by our languageprocessing modules and are also uniquely identified by numeric subscriptslet us consider chain e which is about the unionthere are three noun phrases that corefer and our system managed to extract the boundaries that correspond to all of them 7 matches with ei 13 with e2 and 22 with e3there are two positive training examples formed by 13 22 and 7 13noun phrases between 7 and 13 that do not corefer with 13 are used to form the negative examplesthe negative examples are 9 13 io 13 n 13 and 12 13negative examples can also be found similarly between 13 22as another example neither noun phrase in chain d di and d2 matches with any machineextracted noun phrase boundariesin this case no positive or negative example is formed for noun phrases in chain d the next step is to use a machine learning algorithm to learn a classifier based on the feature vectors generated from the training documentsthe learning algorithm used in our coreference engine is c5 which is an updated version of c45 c5 is a commonly used decision tree learning algorithm and thus it may be considered as a baseline method against which other learning algorithms can be comparedbefore determining the coreference chains for a test document all possible markables need to be extracted from the documentevery markable is a possible anaphor and every markable before the anaphor in document order is a possible antecedent of the anaphor except when the anaphor is nestedif the anaphor is a child or nested markable then its possible antecedents must not be any markable with the same root markable as the current anaphorhowever the possible antecedents can be other root markables and their children that are before the anaphor in document orderfor example consider the two root markables mr tom daughter and his daughter eyes appearing in that order in a test documentthe possible antecedents of his cannot be his daughter or his daughter eyes but can be mr tom or mr tom daughterthe coreference resolution algorithm considers every markable j starting from the second markable in the document to be a potential candidate as an anaphorfor each j the algorithm considers every markable i before j as a potential antecedentfor each pair i and j a feature vector is generated and given to the decision tree classifiera coreferring 
antecedent is found if the classifier returns truethe algorithm starts from the immediately preceding markable and proceeds backward in the reverse order of the markables in the document until there is no remaining markable to test or an antecedent is foundas an example consider the following text with markables already detected by the nlp modules 73 candidacy is being championed by 74 including 76 boss75 77 78 of 7980 currently is 81 to 8283 and 84 have been considered 85 of 87 exchanges86 while 88 and 90 exchanges89 have often fought with 91we will consider how the boldfaced chains are detectedtable 2 shows the pairs of markables tested for coreference to form the chain for ms washingtonhershems washingtonwhen the system considers the anaphor 76 all preceding phrases except 75 are tested to see whether they corefer with it75 is not tested because 76 is its nested noun phrasefinally the decision tree determines that the noun phrase 73 corefers with 76in table 2 we only show the system considering the three anaphors 76 80 and 83 in that orderwe use the same method to generate coreference chains for both muc6 and muc7 except for the followingfor muc7 because of slight changes in the coreference task definition we include a filtering module to remove certain coreference chainsthe task definition states that a coreference chain must contain at least one element that is a head noun or a name that is a chain containing only prenominal modifiers is removed by the filtering modulein order to evaluate the performance of our learning approach to coreference resolution on common data sets we utilized the annotated corpora and scoring programs from muc6 and muc7 which assembled a set of newswire documents annotated with coreference chainsalthough we did not participate in either muc6 or muc7 we were able to obtain the training and test corpora for both years from the muc organizers for research purposesto our knowledge these are the only publicly available annotated corpora for coreference resolutionfor muc6 30 dryrun documents annotated with coreference information were used as the training documents for our coreference enginethere are also 30 annotated training documents from muc7the total size of the 30 training documents is close to 12400 words for muc6 and 19000 words for muc7there are altogether 20910 training examples used for muc6 of which only 65 are positive examples in muc6 2 after training a separate classifier for each year we tested the performance of each classifier on its corresponding test corpusfor muc6 the c5 pruning confidence is set at 20 and the minimum number of instances per leaf node is set at 5for muc7 the pruning confidence is 60 and the minimum number of instances is 2the parameters are determined by performing 10fold crossvalidation on the whole training set for each muc yearthe possible pruning confidence values that we tried are 10 20 40 60 80 and 100 and for minimum instances we tried 2 5 10 15 and 20thus a total of 30 crossvalidation runs were executedone advantage of using a decision tree learning algorithm is that the resulting decision tree classifier can be interpreted by humansthe decision tree generated for muc6 shown in figure 2 seems to encapsulate a reasonable rule of thumb that matches our intuitive linguistic notion of when two noun phrases can coreferit is also interesting to note that only 8 out of the 12 available features in the training examples are actually used in the final decision tree builtmuc6 has a standard set of 30 test documents which is used 
by all systems that participated in the evaluationsimilarly muc7 has a test corpus of 20 documentswe compared our system muc6 and muc7 performance with that of the systems that took part in muc6 and muc7 respectivelywhen the coreference engine is given new test documents its output is in the form of sgml files with the coreference chains properly annotated according to the guidelines3 we then used the scoring programs the decision tree classifier learned for muc6 for the respective years to generate the recall and precision scores for our coreference engineour coreference engine achieves a recall of 586 and a precision of 673 yielding a balanced fmeasure of 626 for muc6for muc7 the recall is 561 the precision is 655 and the balanced fmeasure is 6044 we plotted the scores of our coreference engine against the official test scores of the other systems in figure 3 and figure 4we also plotted the learning curves of our coreference engine in figure 5 and figure 6 showing its accuracy averaged over three random trials when trained on 1 2 3 4 5 10 15 20 25 and 30 training documentsthe learning curves indicate that our coreference engine achieves its peak performance with about 25 training documents or about 11000 to 17000 words of training documentsthis number of training documents would generate tens of thousands of training examples sufficient for the decision tree learning algorithm to learn a good classifierat higher numbers of training documents our system seems to start overfitting the training datafor example on muc7 data training on the full set of 30 training documents results in a more complex decision treeour system scores are in the upper region of the muc6 and muc7 systemswe performed a simple onetailed paired sample ttest at significance level p 005 to determine whether the difference between our system fmeasure score and each of the other systems fmeasure score on the test documents is statistically significant5 we found that at the 95 significance level our system performed better than three muc6 systems and as well as the rest of the muc6 systemsusing the coreference scores of muc7 systems and our system same significance level our system performed better than four muc7 systems and as well as the rest of the muc7 systemsour result is encouraging since it indicates that a learning approach using relatively shallow features can achieve scores comparable to those of systems built using nonlearning approachesit should be noted that the accuracy of our coreference resolution engine depends to a large extent on the performance of the nlp modules that are executed before the coreference engineour current learningbased hmm named entity recognition module is trained on 318 documents tagged with named entities and its score on the muc6 named entity task for the 30 formal test documents is only 889 which is not considered very high by muc6 standardsfor example our named entity recognizer could not identify the two named entities usair and piedmont in the expression usair and piedmont but instead treat them as one single named entityour partofspeech tagger achieves 96 accuracy while the accuracy of noun phrase identification is above 90one factor that affects the performance of a machine learning approach is the set of features usedit is interesting to find out how useful each of our 12 features is in the muc6 and muc7 coreference tasksone way to do this is to train and test using just one feature at a timetable 3 and table 4 show the results of the experimentfor both muc6 and muc7 the 3 
features that give nonzero recall and precision are alias str_match and appositivethe 12 features can be divided into unary and binary featuresthe unary features are i_pronoun j_pronoun def_np and dem_np while the rest are binary in natureall the unary features score an fmeasure of 0the binary features with 0 fmeasure are dist proper_name gender semclass and numberthe alias appositive and str_match features give nonzero fmeasureall these features give rather high precision scores since these features are highly informative we were curious to see how much they contribute to our muc6 and muc7 results of 626 and 604 respectivelysystems alias_str and alias_str_appos in table 3 and table 4 show the results of the experimentin terms of absolute fmeasure the difference between using these three features and using all features is 23 for muc6 and 1 for muc7 in other words the other nine features contribute just 23 and 1 more for each of the muc yearsthese nine features will be the first ones to be considered for pruning away by the c5 algorithmfor example four features namely semclass proper_name def_np and dem_np are not used in the muc6 tree shown in figure 2figure 7 shows the distribution of the test cases over the five positive leaf nodes of the muc6 treefor example about 663 of all distribution of test examples from the 30 muc6 test documents for positive leaf nodes of the muc6 tree the test examples that are classified positive go to the quotif str_matchquot branch of the treeother baseline systems that are used are one_chain one_wrd and hd_wrd for one_chain all markables formed one chainin one_wrd markables corefer if there is at least one common wordin hd_wrd markables corefer if their head words are the samethe purpose of onechain is to determine the maximum recall our system is capable ofthe recall level here indirectly measures how effective the noun phrase identification module isboth one_wrd and hd_wrd are less stringent variations of str_matchthe performance of one_wrd is the worsthd_wrd offers better recall compared to str_match but poorer precisionhowever its fmeasure is comparable to that of str_matchthe score of the coreference system at the university of massachusetts which uses c45 for coreference resolution is shown in table 3resolve is shown because among the muc6 systems it is the only machine learningbased system that we can directly compare tothe other muc6 systems were not based on a learning approachalso none of the systems in muc7 adopted a learning approach to coreference resolution resolve score is not high compared to scores attained by the rest of the muc6 systemsin particular the system recall is relatively lowour system score is higher than that of resolve and the difference is statistically significantthe resolve system is described in three papers mccarthy and lehnert fisher et al and mccarthy as explained in mccarthy the reason for this low recall is that resolve takes only the quotrelevant entitiesquot and quotrelevant referencesquot as input where the relevant entities and relevant references are restricted to quotpersonquot and quotorganizationquot in addition because of limitations of the noun phrase detection module nested phrases are not extracted and therefore do not take part in coreferencenested phrases can include prenominal modifiers possessive pronouns and so forththerefore the number of candidate markables to be used for coreference is smallon the other hand the markables extracted by our system include nested noun phrases mucstyle named entity 
types and other types not defined by mucthese markables will take part in coreferenceabout 3600 toplevel markables are extracted from the 30 muc6 test documents by our systemas detected by our nlp modules only about 35 of these 3600 phrases are quotpersonquot and quotorganizationquot entities and referencesconcentrating on just these types has thus affected the overall recall of the resolve systemresolve way of generating training examples also differs from our system instances are created for all possible pairings of quotrelevant entitiesquot and quotrelevant referencesquot instead of our system method of stopping at the first coreferential noun phrase when traversing back from the anaphor under considerationwe implemented resolve way of generating training examples and the results are reported in table 3 and table 4for muc7 there is no drop in fmeasure for muc6 the fmeasure dropped slightlyresolve makes use of 39 features considerably more than our system 12 featuresresolve feature set includes the two highly informative features alias and str_matchresolve does not use the appositive featurein order to determine the major classes of errors made by our system we randomly chose five test documents from muc6 and determined the coreference links that were either missing or spurious in these sample documentsmissing links result in recall errors spurious links result in precision errorsbreakdowns of the number of spurious and missing links are shown in table 5 and table 6 respectivelythe following two subsections describe the errors in more detailthis section describes the five major types of errors summarized in table 5 in more detail511 prenominal modifier string matchthis class of errors occurs when some strings of the prenominal modifiers of two markables match by surface string comparison and thus by the c5 decision tree in figure 2 the markables are treated as coreferringhowever the entire markable actually does not coreferthe nested noun phrase extraction module is responsible for obtaining the possible prenominal modifiers from a noun phrasein the noun phrase extraction module mistakenly extracted 1 and 2 which are not prenominal modifiersbecause of string match 1 and 2 incorrectly coreferin 2 was correctly extracted as a prenominal modifier but incorrectly corefers with i by string matchcouncil on foreign relations is expected to be named 1 for political affairs former sen tim wirth is expected to get a newly created 2 post for global affairs which would include refugees drugs and environmental issues when the surface strings of two markables match and thus by the c5 decision tree in figure 2 they are treated as coreferringhowever they actually refer to different entities and should not coreferin i actually refers to the entity the house energy and commerce committee and 2 refers to the senate finance committee therefore they should not coreferin the two instances of chief executive officer refer to two different persons namely allan laufgraben and milton petrie and again should not corefer made by the noun phrase identification modulein may and june are incorrectly grouped together by the noun phrase identification module as one noun phrase that is may junethis markable then incorrectly causes the appositive feature to be true which results in classifying the pair as coreferentialin fact 2 should not be in apposition to ihowever we classified this error as a noun phrase identification error because it is the first module that causes the errorin the noun phrase module extracted 
metaphor inc instead of metaphor inc unitthis causes 2 to refer to metaphor inc instead of metaphor inc unit is incorrectly treated as being in apposition to the antecedent and therefore causes the noun phrases to coreferthe precision scores obtained when using the appositive feature alone are shown in table 3 and table 4 which suggest that the module can be improved furtherexamples where apposition determination is incorrect are shown in matthew mchugh i and 2 and transition official gus speth for the director of the agency for international development metaphor a software subsidiary that ibm purchased in 1991 also named 2 2 currently a senior vice president president and chief executive officer515 errors in alias determinationthis class of errors occurs when the anaphor is incorrectly treated as an alias of the antecedent thus causing the noun phrase pair to coreferin the two phrases i and 2 corefer because the alias feature is incorrectly determined to be true consuela washington a longtime i staffer and an expert in securities laws is a leading candidate to be chairwoman of the securities and exchange commission in the clinton administration ms washington candidacy is being championed by several powerful lawmakers including her boss chairman john dingell of 2this subsection describes the six major classes of errors summarized in table 6 in more detail521 inadequacy of current surface featuresthis class of errors is due to the inadequacy of the current surface features because they do not have information about other words and other knowledge sources that may provide important clues for coreferenceas a result the set of shallow features we used is unable to correctly classify the noun phrases in the examples below as coreferringexample illustrates why resolving 2 is difficulti securities exchanges banks and futures exchanges are all possible antecedents of 2 and the feature set must include more information to be able to pick the correct onethe conjunction and in and was named in are important cues to determine coreferencein addition it may also be possible to capture noun phrases in predicate constructions like where i is the subject and 2 is the object tion 513the noun phrase identification module may extract noun phrases that do not match the phrases in the coreference chain therefore causing missing links and recall error523 errors in semantic class determinationthese errors are caused by the wrong assignment of semantic classes to wordsfor example i should be assigned quotorganizationquot but it is assigned quotunknownquot in and 2 should be assigned quotdatequot instead of quotunknownquot in however correcting these classes will still not because the noun phrases in the examples to coreferthis is because the values of the semclass feature in the training examples are extremely noisy a situation caused largely by our semantic class determination modulein many of the negative training examples although the noun phrases are assigned the same semantic classes these assignments do not seem to be correctsome examples are and a better algorithm for assigning semantic classes and a more refined semantic class hierarchy are needed separately mgm said it completed a previously announced financial restructuring designed to clean up its balance sheetremoving 900 million in bank debt from mgm books and reducing its debttoequity ratio to i from 2with a view toward a future sale of the companymccarthy has also performed an analysis of errors while conducting an evaluation on the muc5 english 
joint venture corpusa large number of the spurious links are caused by what he terms quotfeature ambiguityquot which means that feature values are not computed perfectlyas seen in table 5 our string match feature accounts for most of the spurious linksalso seven of the spurious links are caused by alias and apposition determinationas with resolve quotfeature ambiguityquot is the main source of precision errorsfor resolve a large number of the missing links are caused by quotincomplete semantic knowledgequot and quotunused featuresquot for our system the errors due to the inadequacy of surface features and semantic class determination problems account for about 75 of the missing linksquotunused featuresquot means that some of the features or combinations of features that are needed to classify pairs of phrases as coreferential are not present in the decision trees similarly the inadequacy of our system surface features means that the current feature set may not be enough and more information sources should be addedbecause a detailed error analysis of resolve would require not only its muc6 response file but also the output of its various components we cannot perform the same error analysis that we did for our system on resolvethere is a long tradition of work on coreference resolution within computational linguistics but most of it was not subject to empirical evaluation until recentlyamong the papers that have reported quantitative evaluation results most are not based on learning from an annotated corpus to our knowledge the research efforts of aone and bennett ge hale and charniak kehler mccarthy and lehnert fisher et al and mccarthy are the only ones that are based on learning from an annotated corpusge hale and charniak used a statistical model for resolving pronouns whereas we used a decision tree learning algorithm and resolved general noun phrases not just pronounssimilarly kehler used maximum entropy modeling to assign a probability distribution to alternative sets of coreference relationships among noun phrase entity templates whereas we used decision tree learningthe work of aone and bennett mccarthy and lehnert fisher et al and mccarthy employed decision tree learningthe resolve system is presented in mccarthy and lehnert fisher et al and mccarthy mccarthy and lehnert describe how resolve was tested on the muc5 english joint ventures corpusit used a total of 8 features 3 of which were specific to the ejv domainfor example the feature jvchildi determined whether i referred to a joint venture formed as the result of a tieupmccarthy describes how the original resolve for muc5 ejv was improved to include more features 8 of which were domain specific and 30 of which were domain independentfisher et al adapted resolve to work in muc6the features used were slightly changed for this domainof the original 30 domainindependent features 27 were usedthe 8 domainspecific features were completely changed for the muc6 taskfor example jvchildi was changed to childi to decide whether i is a quotunitquot or a quotsubsidiaryquot of a certain parent companyin contrast to resolve our system makes use of a smaller set of 12 features and as in aone and bennett system the features used are generic and applicable across domainsthis makes our coreference engine a domainindependent modulealthough aone and bennett system also made use of decision tree learning for coreference resolution they dealt with japanese texts and their evaluation focused only on noun phrases denoting organizations whereas our 
evaluation which dealt with english texts encompassed noun phrases of all types not just those denoting organizationsin addition aone and bennett evaluated their system on noun phrases that had been correctly identified whereas we evaluated our coreference resolution engine as part of a total system that first has to identify all the candidate noun phrases and has to deal with the inevitable noisy data when mistakes occur in noun phrase identification and semantic class determinationthe contribution of our work lies in showing that a learning approach when evaluated on common coreference data sets is able to achieve accuracy competitive with that of stateoftheart systems using nonlearning approachesit is also the first machine learningbased system to offer performance comparable to that of nonlearning approachesfinally the work of cardie and wagstaff also falls under the machine learning approachhowever they used unsupervised learning and their method did not require any annotated training datatheir clustering method achieved a balanced fmeasure of only 536 on muc6 test datathis is to be expected supervised learning in general outperforms unsupervised learning since a supervised learning algorithm has access to a richer set of annotated data to learn fromsince our supervised learning approach requires only a modest number of annotated training documents to achieve good performance we argue that the better accuracy achieved more than justifies the annotation effort incurredin this paper we presented a learning approach to coreference resolution of noun phrases in unrestricted textthe approach learns from a small annotated corpus and the task includes resolving not just pronouns but general noun phraseswe evaluated our approach on common data sets namely the muc6 and muc7 coreference corporawe obtained encouraging results indicating that on the general noun phrase coreference task the learning approach achieves accuracy comparable to that of nonlearning approachesthis paper is an expanded version of a preliminary paper that appeared in the proceedings of the 1999 joint sigdat conference on empirical methods in natural language processing and very large corporawe would like to thank the muc organizers who made available to us the muc6 and muc7 data sets without which this work would have been impossiblewe also thank beth sundheim for helpful comments on an earlier version of this paper and hai leong chieu for his implementation of the hmmbased named entity recognition module
J01-4004
A Machine Learning Approach to Coreference Resolution of Noun Phrases. In this paper we present a learning approach to coreference resolution of noun phrases in unrestricted text. The approach learns from a small annotated corpus, and the task includes resolving not just a certain type of noun phrase but rather general noun phrases. It also does not restrict the entity types of the noun phrases; that is, coreference is assigned whether they are of organization, person, or other types. We evaluate our approach on common data sets and obtain encouraging results, indicating that on the general noun phrase coreference task the learning approach holds promise and achieves accuracy comparable to that of nonlearning approaches. Our system is the first learning-based system that offers performance comparable to that of state-of-the-art nonlearning systems on these data sets. We include all noun phrases returned by the NP identifier and report an F-measure of 62.6 for MUC-6 data and 60.4 for MUC-7 data. We construct this entity-mention graph by learning to decide, for each mention, which preceding mention (if any) belongs in the same equivalence class; this approach is commonly called the pairwise coreference model.
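To make the pairwise coreference model concrete, the following is a minimal sketch in Python. It assumes mentions are represented as dictionaries with hypothetical keys ("text", "position", "semclass", "is_pronoun"), uses scikit-learn's DecisionTreeClassifier as a stand-in for the decision tree learner, and reduces the twelve generic features described above to four illustrative ones (string match, semantic class agreement, distance, pronoun); the names pair_features, train, and resolve are our own.

# Minimal sketch of a pairwise coreference classifier (assumptions noted above).
from sklearn.tree import DecisionTreeClassifier

def pair_features(antecedent, anaphor):
    """Turn a candidate (antecedent, anaphor) pair into a numeric feature vector."""
    return [
        int(antecedent["text"].lower() == anaphor["text"].lower()),  # string match
        int(antecedent["semclass"] == anaphor["semclass"]),          # semantic class agreement
        anaphor["position"] - antecedent["position"],                # distance in mentions
        int(anaphor["is_pronoun"]),                                  # pronominal anaphor
    ]

def train(pairs, labels):
    """pairs: list of (antecedent, anaphor) mention dicts; labels: 1 if coreferent, else 0."""
    X = [pair_features(a, b) for a, b in pairs]
    clf = DecisionTreeClassifier()
    return clf.fit(X, labels)

def resolve(mentions, clf):
    """Link each mention to the closest preceding mention the classifier accepts, if any."""
    links = {}
    for j, anaphor in enumerate(mentions):
        for i in range(j - 1, -1, -1):                # scan antecedent candidates right to left
            x = [pair_features(mentions[i], anaphor)]
            if clf.predict(x)[0] == 1:
                links[j] = i                          # first positive antecedent wins
                break
    return links

At resolution time the sketch links each mention to the nearest preceding mention that the classifier judges coreferent, which mirrors the idea of deciding, for each mention, which preceding mention (if any) belongs in the same equivalence class.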
a critique and improvement of an evaluation metric for text segmentation metric initially proposed by beeferman berger and lafferty is becoming the standard measure for assessing text segmentation algorithms however a theoretical analysis of the metric finds several problems the metric penalizes false negatives more heavily than false positives overpenalizes near misses and is affected by variation in segment size dis we propose a simple modification to the that remedies these problems this new metriccalled windowdiffmoves a fixedsized window across the text and penalizes the algorithm whenever the number of boundaries within the window does not match the true number of boundaries for that window of text the pk evaluation metric initially proposed by beeferman berger and lafferty is becoming the standard measure for assessing text segmentation algorithmshowever a theoretical analysis of the metric finds several problems the metric penalizes false negatives more heavily than false positives overpenalizes near misses and is affected by variation in segment size distributionwe propose a simple modification to the pk metric that remedies these problemsthis new metriccalled windowdiffmoves a fixedsized window across the text and penalizes the algorithm whenever the number of boundaries within the window does not match the true number of boundaries for that window of texttext segmentation is the task of determining the positions at which topics change in a stream of textinterest in automatic text segmentation has blossomed over the last few years with applications ranging from information retrieval to text summarization to story segmentation of video feedsearly work in multiparagraph discourse segmentation examined the problem of subdividing texts into multiparagraph units that represent passages or subtopicsan example drawn from hearst is a 21paragraph science news article called stargazers whose main topic is the existence of life on earth and other planetsits contents can be described as consisting of the following subtopic discussions the texttiling algorithm attempts to recognize these subtopic changes by making use of patterns of lexical cooccurrence and distribution subtopic boundaries are assumed to occur at the point in the documents at which large shifts in vocabulary occurmany others have used this technique or slight variations of it for subtopic segmentation other techniques use clustering andor similarity matrices based on word cooccurrences and still others use machine learning techniques to detect cue words or handselected cue words to detect segment boundaries researchers have explored the use of this kind of document segmentation to improve automated summarization and automated genre detection text segmentation issues are also important for passage retrieval a subproblem of information retrieval more recently a great deal of interest has arisen in using automatic segmentation for the detection of topic and story boundaries in news feeds sometimes segmentation is done at the clause level for the purposes of detecting nuances of dialogue structure or for more sophisticated discourseprocessing purposes some of these algorithms produce hierarchical dialogue segmentations whose evaluation is outside the scope of this discussionthere are two major difficulties associated with evaluating algorithms for text segmentationthe first is that since human judges do not always agree where boundaries should be placed and how fine grained an analysis should be it is difficult to choose a 
reference segmentation for comparisonsome evaluations circumvent this difficulty by detecting boundaries in sets of concatenated documents where there can be no disagreements about the fact of the matter others have several human judges make ratings to produce a gold standard the second difficulty with evaluating these algorithms is that for different applications of text segmentation different kinds of errors become importantfor instance for information retrieval it can be acceptable for boundaries to be off by a few sentences a condition called a near missbut for news boundary detection accurate placement is crucialfor this reason some researchers prefer not to measure the segmentation algorithm directly but consider its impact on the end application our approach to these two difficulties is to evaluate algorithms on real segmentations using a gold standard and to develop an evaluation algorithm that suits all applications reasonably wellprecision and recall are standard evaluation measures for information retrieval tasks and are often applied to evaluation of text segmentation algorithms as wellprecision is the percentage of boundaries identified by an algorithm that are indeed true boundaries recall is the percentage of true boundaries that are identified by the algorithmhowever precision and recall are problematic for two reasonsthe first is that there is an inherent tradeoff between precision and recall improving one tends to cause the score for the other to declinein the segmentation example positing more boundaries will tend to improve the recall but at the same time reduce the precisionsome evaluators use a weighted combination of the two known as the fmeasure but this is difficult to interpret another approach is to plot a precisionrecall curve showing the scores for precision at different levels of recalltwo hypothetical segmentations of the same reference document segmentationthe boxes indicate sentences or other units of subdivision and spaces between boxes indicate potential boundary locationsalgorithm a0 makes two near misses while algorithm a1 misses both boundaries by a wide margin and introduces three false positivesboth algorithms would receive scores of 0 for both precision and recallanother problem with precision and recall is that they are not sensitive to near missesconsider for example a reference segmentation and the results obtained by two different text segmentation algorithms as depicted in figure 1in both cases the algorithms fail to match any boundary precisely both receive scores of 0 for precision and recallhowever algorithm a0 is close to correct in almost all cases whereas algorithm a1 is entirely off adding extraneous boundaries and missing important boundaries entirelyin some circumstances it would be useful to have an evaluation metric that penalizes a0 less harshly than a1beeferman berger and lafferty introduce a new evaluation metric that attempts to resolve the problems with precision and recall including assigning partial credit to near missesthey justify their metric as follows segmentation is about identifying boundaries between successive units of information in a text corpustwo such units are either related or unrelated by the intent of the document authora natural way to reason about developing a segmentation algorithm is therefore to optimize the likelihood that two such units are correctly labeled as being related or being unrelatedour error metric pµ is simply the probability that two sentences drawn randomly from the corpus are correctly 
identified as belonging to the same document or not belonging to the same documentthe derivation of pµ is rather involved and a much simpler version is adopted in the later work and by othersthis version referred to as pk is calculated by setting k to half of the average true segment size and then computing penalties via a moving window of length k at each location the algorithm determines whether the two ends of the probe are in the same or different segments in the reference segmentation and increases a counter if the algorithms segmentation disagreesthe resulting count is scaled between 0 and 1 by dividing by the number of measurements takenan algorithm that assigns all boundaries correctly receives a score of 0beeferman berger and lafferty state as part of an illustration of how the pk metric handles false negativesthe arrowed lines indicate the two poles of the probe as it moves from left to right the boxes indicate sentences or other units of subdivision and the width of the window is four meaning four potential boundaries fall between the two ends of the probesolid lines indicate no penalty is assigned dashed lines indicate a penalty is assignedtotal penalty is always k for false negatives the justification for this metric that to discourage cheating of the metric degenerate algorithmsthose that place boundaries at every position or place no boundaries at allare assigned the same scoreadditionally the authors define a false negative as a case when a boundary is present in the reference segmentation but missing in the algorithms hypothesized segmentation and a false positive as an assignment of a boundary that does not exist in the reference segmentationthe pk metric is fast becoming the standard among researchers working in text segmentation however we have reservations about this metricwe claim that the fundamental premise behind it is flawed additionally it has several significant drawbacks which we identify in this sectionin the remainder of the paper we suggest modifications to resolve these problems and we report the results of simulations that validate the analysis and suggest that the modified metric is an improvement over the originalassume a text with segments of average size 2k where k is the distance between the two ends of the pk probeif the algorithm misses a boundaryproduces a false negativeit receives k penaltiesto see why suppose s1 and s2 are two segments of length 2k and the algorithm misses the transition from s1 to s2when pk sweeps across s1 if both ends of the probe point to sentences that are inside s1 the two sentences are in the same segment in both the reference and the hypothesis and no penalty is incurredwhen the right end of the probe crosses the reference boundary between s1 and s2 it will start recording nonmatches since the algorithm assigns the two sentences to the same segment while the reference does notthis circumstance happens k times until both ends of the probe point to sentences that are inside s2this analysis assumes average size segments variation in segment size is discussed below but does not have a large effect on this resultan illustration of how the pk metric handles false positivesnotation is as in figure 2total penalty depends on the distance between the false positive and the relevant correct boundaries on average it is k2 assuming a uniform distribution of boundaries across the documentthis example shows the consequences of two different locations of false positives on the left the penalty is k2 on the right it is k now consider 
false positivesa false positive occurs when the algorithm places a boundary at some position where there is no boundary in the reference segmentationthe number of times that this false positive is noted by pk depends on where exactly inside s2 the false positive occursif it occurs in the middle of the segment the false positive is noted k times if it occurs j 2k that is as long as each segment is about half the average size or largerthe penalty will then decrease linearly with sizesize so long as k aiin order for this to be true both the segment to the left and the segment to the right of the missed boundary have to be of size greater than k otherwise the penalty can only be equal to the size of the smaller segmentwhen sizesize k from a boundary is k thus for larger segments the average penalty assuming a uniform distribution becomes larger because there are more places in the segment that are at least k positions away from a boundarythe behavior at the edges of the segments remains the same though so the average penalty never reaches k now consider what happens with smaller segmentssuppose we have a false positive in segment aas size decreases from 2k to k the average false positive penalty decreases linearly with it because when size decreases below 2k the maximum distance any sentence can be from a boundary becomes less than k therefore the a reference segmentation and five different hypothesized segmentations with different properties maximum possible penalty for a false positive in a is less than k and this number continues to decrease as size decreaseswhen size k as long as the false positive is a distance 2k from the actual boundary between the first and second reference segmentsthe penalty is large because the metric catches both the false negative and the false positive errorsthe segmentations assigned by algorithms a0 and a2 are treated as discussed earlier in conjunction with problem 1 the one assigned by algorithm a0 has a false negative and thus incurs a penalty of k and the one assigned by algorithm a2 has a false positive and thus incurs a penalty of 0more formally where b represents the number of boundaries between positions i and j in the text and n represents the number of sentences in the textthis approach clearly eliminates the asymmetry between the false positive and false negative penalties seen in the pk metricit also catches false positives and false negatives within segments of length less than k to understand the behavior of windowdiff with respect to the other problems consider again the examples in figure 5this metric penalizes algorithm a4 the most assigning it a penalty of about 2kalgorithms a0 a1 and a2 receive the same penalty and algorithm a3 receives the smallest penalty thus although it makes the mistake of penalizing algorithm a1 as much as algorithms a0 and a2 it correctly recognizes that the error made by algorithm a3 is a near miss and assigns it a smaller penalty than algorithm a1 or any of the otherswe argue that this kind of error is less detrimental than the errors made by pkwindowdiff successfully distinguishes the nearmiss error as a separate kind of error and penalizes it a different amount something that pk is unable to dowe explored a weighted version of windowdiff in which the penalty is weighted by the difference ri aihowever the results of the simulations were nearly identical with those of the nonweighted version of this metric so we do not consider the weighted version furtherthis section describes a set of simulations that verify the 
theoretical analysis of the pk metric presented aboveit also reports the results of simulating two alternatives including the proposed solution just describedfor the simulation runs described below three metrics were implemented in these studies a single trial consists of generating a reference segmentation of 1000 segments with some distribution generating different experimental segmentations of a specific type 100 times computing the metric based on the comparison of the reference and experimental segmentations and averaging the 100 resultsfor example we might generate a reference segmentation r then generate 100 experimental segmentations that have false negatives with probability 05 and then compute the average of their pk penaltieswe carried out 10 such trials for each experiment and averaged the average penalties over these trialsthe first set of tests was designed to test the metrics performance on texts with different segment size distributions we generated four sets of reference segmentations with segment size uniformly distributed between two numbersnote that the units of segmentation are deliberately left unspecifiedso a segment of size 25 can refer to 25 words clauses or sentenceswhichever is applicable to the task under considerationalso note that the same tests were run using larger segment sizes than those reported here with the results remaining nearly identicalfor these tests the mean segment size was held constant at 25 for each set of reference segments in order to produce distributions of segment size with the same means but different variancesthe four ranges of segment sizes were and the results of these tests are shown in table 1the tests used the following types of experimental segmentations the results indicate that variation in segment size does make a difference but not a very big onethe pk value for the range with fn segmentation is on average 0245 and it decreases to 0223 for the rangesimilarly the fp segmentation decreases from 0128 for the range to 0107 for the range and the fnp segmentation decreases from 0317 for the range to 0268 for the rangethus variation in segment size has an effect on pk as predictednote that for false negatives the pk value for the range is not much different than for the rangethis is expected since there are no segments of size less than k in these conditionsfor the range the pk value is slightly smaller and for the range it is smaller stillthese results are to be expected since more segments in these ranges will be of length less than k for the fp segmentations on the other hand the decrease in pk value is more pronounced falling from 0128 to 0107 as the segment size range changes from to this is also consistent with our earlier analysis of the behavior of the metric on false positives as segment size decreasesnotice that the difference in pk values between and is slightly larger than the other two differencesthis happens because for segment sizes k the false positive penalty disappears completelythe results for the fnp segmentation are consistent with what one would expect of a mix of the fn and fp segmentationsseveral other observations can be made from table 1we can begin to make some judgments about how the metric performs on algorithms prone to different kinds of errorsfirst pk penalizes false negatives about twice as much as false positives as predicted by our analysisthe experimental segmentations in table 1a contain on average 500 false negatives while the ones in table 1b contain on average 500 false positives but the 
penalty for the table 1b segmentations is consistently about half that for those in table 1athus algorithms prone to false positives are penalized less harshly than those prone to false negativesthe table also shows the performance of the two other metricspk simply doubles the false positive penalty while wd counts and compares the number of boundaries between the two ends of the probe as described earlierboth pk and wd appear to solve the problem of underpenalizing false positives but wd has the added benefit of being more stable across variations in segment size distributionthus wd essentially solves problems 1 2 and 3table 1c shows that for the fnp segmentation there is a disparity between the performances of pk and wdit appears that pk is harsher in this situationfrom the above discussion we know that wd is more lenient in situations where a false negative and a false positive occur near each other than pk ishowever pk is more lenient for pure false positives that occur close to boundariesthus it is not immediately clear why pk is harsher in this situation but a more detailed look provides the answerlet us begin the analysis by trying to explain why pk scores for the fnp segmentation make sensethe fnp segmentation places both false negatives and false positives with probability 05since we are working with reference segmentations of 1000 segments this means 500 missed boundaries and 500 incorrect boundariessince the probabilities are uniformly distributed across all segments and all boundaries on average one would expect the following distribution of errors a type a error is a standard false positive so the average penalty is k2a type b error is a standard false negative so the average penalty is k it remains to figure out what the average penalty is for a type c errormodeling the behavior of the metric a type c error occurrence in which a false positive and a false negative are some distance e k from each other incurs a penalty of 2e where e is assigned for the false positive and another e is assigned for the false negativethis may range from 0 to 2k and since error distribution is uniform the penalty is k on averagethe same as for a regular false negativeto translate this into actual values we assume the metric is linear with respect to the number of errors thus if pk outputs a penalty of p for 500 false negatives it would have a penalty of p2 for 250 false negativeslet a be the penalty for 500 type a errors b the penalty for 500 type b errors and c the penalty for 500 type c errors then the penalty for the fnp segmentation is p a2 b2 c2assuming the metric is linear we know that c b 2a we can thus substitute either b or 2a for c we choose to substitute 2a because pk is strongly affected by segment size variation for type a and type c errors but not for type b errorsthus replacing c with 2a is more accurateperforming the substitution we have p 3 a2 b2we have a and b from the fp and fn data respectively so we can compute p the results arranged by segment size variation are as follows as can easily be seen the estimate produced using this method is very similar to the actual pk valuethe same sort of analysis applies for pk and wdin pk type a errors are penalized k on average since the false positive penalty is doubledtype b errors have an average penalty of k as for pktype c errors have an average penalty of 3e where 2e is assigned for the false positive and e is assigned for the false negativethis means that the average penalty for a type c error is 3 k2since we know that c 15a by the 
linear metric assumption we have p a2 b2 15 a2 5 a4 b2 the results arranged by segment size variation are as follows estimate 0443 0429 0401 0378 actual 0446 0432 0403 0375 finally wd incurs an average penalty of k for both type a and type b errorsfor type c errors the penalty is 2e so it is also k on averagethus we get p a2 2b a2 a b2the results arranged by segment size variation are as follows estimate 0363 0364 0359 0355 actual 0376 0370 0357 0343 these estimates do not correspond to the actual results quite as closely as the estimates for pk and pk did but they are still very closeone reason why these estimates are a little less accurate is that for wd type c errors are more affected by variation in segment size than either type a or type b errorsthis is clear from the fact that the decrease is greater in the actual data than in the estimatetable 2 shows data similar to those of table 1 but using two different probability values for error occurrence 005 and 025these results have the same tendencies as those shown above for p 05the second set of tests was designed to assess the performance of the metrics on algorithms prone to different kinds of errorsthis would determine whether the metrics are consistent in applying penalties or whether they favor certain kinds of errors over othersfor these trials we generated the reference segmentation using a uniform distribution of segment sizes in the rangewe picked this range because it has reasonably high segment size variation but segment size does not dip below k for the average error score for pk pk and wd over 10 trials of 100 measurements each shown by segment size distribution range false negatives were placed with probability 005 at each boundary false positives were placed with probability 005 uniformly distributed within each segment and both false negatives and false positives were placed with probability 005 false negatives were placed with probability 025 at each boundary false positives were placed with probability 025 uniformly distributed within each segment and both false negatives and false positives were placed with probability 025 reasons described above this means the results will not be skewed by the sensitivity of pk and pk to segment size variationsthe tests analyzed below were performed using the high error occurrence probabilities of 05 but similar results were obtained using probabilities of 025 and 005 as wellthe following error distributions were used1 occurring at each point with probability p number of segments corresponds to a 05 probability value for each individual segment the results are shown in table 3pk penalizes fp2 less than fp1 and fp3 and fnp2 less than fnp1 and fnp3this result is as expectedfp2 and fnp2 have false positives normally distributed around each boundary which means that more of the false positives are close to the boundaries and thus are penalized lessif we made the standard deviation smaller we would expect this difference to be even more apparentpk penalized fp2 and fnp2 the least in their respective categories and fp1 and fnp1 the most with fp3 and fnp3 falling in betweenthese results are as expected for the same reasons as for pkthe difference in the penalty for fp1 and fp3 for both pk and pk but especially apparent for pkis interestingin fpfnp1 false positive probability is uniformly distributed throughout each segment whereas in fpfnp3 false positive probability is uniformly distributed throughout the entire documentthus the fpfnp3 segmentations are more likely to have boundaries that 
are very close to each other since they are not segment dependent while fpfnp1 are limited to at most one false positive per segmentthis results in pk assigning smaller penalties for fpfnp3 since groups of false positives close together would be underpenalizedthis difference is also present in the pk results but is about half for obvious reasonswd penalized fp1 the most and fp3 the least among the fp segmentationsamong the fnp segmentations fnp1 was penalized the most and fnp2 the leastto see why we examine the results for the fp segmentationswd penalizes pure false positives the same amount regardless of how close they are to a boundary the only way false positives are underpenalized is if they occur in bunchesas mentioned earlier this is most likely to happen in fp3it is least likely to happen in fp1 since in fp1 there is a maximum of one false positive per segment and this false positive is not necessarily close to a boundaryin fp2 false positives are also limited to one per segment but they are also more likely to be close to boundariesthis increases the likelihood that 2 false positives will be within k sentences of each other and thus makes wd give a slightly lower score to the fp2 segmentation than to the fp1 segmentationnow let us look at the fnp segmentationsfnp3 is penalized less than fnp1 for the same reason described above and fnp2 is penalized even less than fnp3the closer a type c error is to the boundary the lower the penaltyfnp2 has more errors distributed near the boundaries than the others thus the fnp2 segmentation is penalized less than either fnp1 or fnp3the same tests were run for different error occurrence probabilities achieving results similar to those for p 05 just describedthere is a slight difference for the case of p 005 because the error probability is too small for some of the trends to manifest themselvesin particular the differences in the way wd treats the different segmentations disappear when the error probability is this smallwe also performed a small set of tests to verify the theoretical finding that pk and pk overpenalize nearmiss errors as compared with pure false positives and that wd does the opposite overpenalizing the pure false positivesspace limitations prevent detailed reporting of these results but the simulations did indeed verify these expectationswe have found that the pk error metric for text segmentation algorithms is affected by the variation of segment size distribution becoming slightly more lenient as the variance increasesit penalizes false positives significantly less than false negatives particularly if the false positives are uniformly distributed throughout the documentit penalizes nearmiss errors more than pure false positives of equal magnitudefinally it fails to take into account situations in which multiple boundaries occur between the two sides of the probe and it often misses or underpenalizes mistakes in small segmentswe proposed two modifications to tackle these problemsthe first which we call pk simply doubles the false positive penaltythis solves the problem of overpenalizing false negatives but it is not effective at dealing with the other problemsthe second which we call windowdiff counts the number of boundaries between the two ends of a fixedlength probe and compares this number with the number of boundaries found in the same window of text for the reference segmentationthis modification addresses all of the problems listed abovewd is only slightly affected by variation of segment size distribution gives equal 
weight to the false positive penalty and the false negative penalty is able to catch mistakes in small segments just as well as mistakes in large segments and penalizes nearmiss errors less than pure false positives of equal magnitudehowever it has some problems of its ownwd penalizes all pure false positives the same amount regardless of how close they are to an actual boundaryit is not clear whether this is a good thing or not but it seems to be preferable to overpenalizing near missesthe discussion above addresses problems 1 through 4 but does not address problem 5 how does one interpret the values produced by the metricfrom the tests we have run it appears that the wd metric grows in a roughly linear fashion with the difference between the reference and the experimental segmentationsin addition we feel that wd is a more meaningful metric than pkcomparing two stretches of text to see how many discrepancies occur between the reference and the algorithms result seems more intuitive than determining how often two text units are incorrectly labeled as being in different segmentsthis work was completed while the second author was a visiting professor at harvard universityboth authors thank barbara grosz and stuart shieber without whom this work would not have happened and freddy choi for some helpful explanationsthey would also like to thank the anonymous reviewers for their valuable commentspartial support for the research reported in this paper was provided by national science foundation grants iri9618848 and cda9401024
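As a concrete reference for the metrics analyzed above, the sketch below implements Pk, the Pk' variant that doubles the false-positive penalty, and WindowDiff over segmentations given as lists of segment sizes (in sentences). Both segmentations are assumed to cover the same number of sentences, and k would normally be set to half the mean reference segment size. The fp_weight treatment of Pk' is one reading of "doubling the false positive penalty", and the helper names are our own.

# Minimal sketch of Pk, Pk', and WindowDiff (assumptions noted above).

def seg_ids(sizes):
    """Map each sentence to the index of the segment that contains it."""
    ids = []
    for seg, size in enumerate(sizes):
        ids.extend([seg] * size)
    return ids

def boundary_vector(sizes):
    """bound[i] == 1 iff a segment boundary follows sentence i."""
    ids = seg_ids(sizes)
    return [int(ids[i] != ids[i + 1]) for i in range(len(ids) - 1)]

def pk(ref_sizes, hyp_sizes, k, fp_weight=1):
    """Probe of width k; penalize each position where ref and hyp disagree
    on whether the two probe ends fall in the same segment."""
    ref, hyp = seg_ids(ref_sizes), seg_ids(hyp_sizes)
    n, penalty = len(ref), 0.0
    for i in range(n - k):
        same_ref = ref[i] == ref[i + k]
        same_hyp = hyp[i] == hyp[i + k]
        if same_ref and not same_hyp:        # false-positive style disagreement
            penalty += fp_weight
        elif not same_ref and same_hyp:      # false-negative style disagreement
            penalty += 1
    return penalty / (n - k)

def pk_prime(ref_sizes, hyp_sizes, k):
    """Pk with the false-positive disagreements counted twice."""
    return pk(ref_sizes, hyp_sizes, k, fp_weight=2)

def window_diff(ref_sizes, hyp_sizes, k):
    """Penalize each window of width k in which the number of hypothesized
    boundaries differs from the number of reference boundaries."""
    ref_b, hyp_b = boundary_vector(ref_sizes), boundary_vector(hyp_sizes)
    n = len(seg_ids(ref_sizes))
    disagree = sum(
        int(sum(ref_b[i:i + k]) != sum(hyp_b[i:i + k]))
        for i in range(n - k)
    )
    return disagree / (n - k)

All three functions return 0 for a perfect segmentation and grow toward 1 as the hypothesis diverges from the reference, matching the scaling described in the paper.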
J02-1002
A Critique and Improvement of an Evaluation Metric for Text Segmentation. The Pk evaluation metric, initially proposed by Beeferman, Berger, and Lafferty, is becoming the standard measure for assessing text segmentation algorithms. However, a theoretical analysis of the metric finds several problems: the metric penalizes false negatives more heavily than false positives, overpenalizes near misses, and is affected by variation in segment size distribution. We propose a simple modification to the Pk metric that remedies these problems. This new metric, called WindowDiff, moves a fixed-sized window across the text and penalizes the algorithm whenever the number of boundaries within the window does not match the true number of boundaries for that window of text. As a measure for segmentation quality, we develop WindowDiff, which only evaluates segment boundaries, not the labels assigned to them.
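The simulation setup used in the paper can be sketched along the following lines: generate a reference segmentation with segment sizes drawn uniformly from a range, corrupt it with false negatives (dropped boundaries) and false positives (boundaries inserted uniformly within segments) at a given probability, and average a metric over many runs. This is a simplified illustration, not the exact experimental protocol (it draws a fresh reference per run rather than 100 corruptions of each of 10 references); the corruption procedure and all function names are assumptions, and the metric argument is any callable with the metric(ref_sizes, hyp_sizes, k) signature used in the sketch above.

# Simplified sketch of the Monte Carlo evaluation of a segmentation metric.
import random

def random_reference(n_segments, lo, hi):
    """Reference segmentation with segment sizes uniform in [lo, hi]."""
    return [random.randint(lo, hi) for _ in range(n_segments)]

def corrupt(ref_sizes, p_fn, p_fp):
    """Drop each true boundary with probability p_fn; with probability p_fp,
    insert one spurious boundary uniformly inside each segment."""
    bounds, pos = [], 0
    for size in ref_sizes[:-1]:
        pos += size
        if random.random() >= p_fn:                 # keep the true boundary
            bounds.append(pos)
    start = 0
    for size in ref_sizes:
        if size > 1 and random.random() < p_fp:     # spurious boundary inside the segment
            bounds.append(start + random.randint(1, size - 1))
        start += size
    bounds = sorted(set(bounds))
    total, sizes, prev = sum(ref_sizes), [], 0      # convert boundary positions back to sizes
    for b in bounds + [total]:
        sizes.append(b - prev)
        prev = b
    return sizes

def average_score(metric, n_runs=100, n_segments=1000, lo=20, hi=30,
                  p_fn=0.5, p_fp=0.5):
    """Average metric(ref, hyp, k) over n_runs corrupted segmentations."""
    scores = []
    for _ in range(n_runs):
        ref = random_reference(n_segments, lo, hi)
        hyp = corrupt(ref, p_fn, p_fp)
        k = sum(ref) // (2 * len(ref))              # half the mean true segment size
        scores.append(metric(ref, hyp, k))
    return sum(scores) / len(scores)

Paired with the metric sketches given after the paper above, a call such as average_score(window_diff, p_fn=0.5, p_fp=0.5) would produce scores of roughly the kind reported for the FNP condition, while setting one of the probabilities to zero isolates the pure false-negative or false-positive behavior.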
generating referring expressions boolean extensions of the incremental algorithm this paper brings a logical perspective to the generation of referring expressions addressing the incompleteness of existing algorithms in this area after studying references to individual objects we discuss references to sets including boolean descriptions that make use of negated and disjoined properties to guarantee that a distinguishing description is generated whenever such descriptions exist the paper proposes generalizations and extensions of the incremental this paper brings a logical perspective to the generation of referring expressions addressing the incompleteness of existing algorithms in this areaafter studying references to individual objects we discuss references to sets including boolean descriptions that make use of negated and disjoined propertiesto guarantee that a distinguishing description is generated whenever such descriptions exist the paper proposes generalizations and extensions of the incremental algorithm of dale and reiter generation of referring expressions is a key task of most natural language generation systems regardless of the type of knowledge base forming the input to the generator many objects will not be designated in it via an ordinary proper namea person like mr jones for example may be designated using an artificial name like jones083 if the name jones is not uniquely distinguishingthe same is true for a piece of furniture a tree or an atomic particle for instance for which no proper name is in common use at all or if the generator tries to refer to an entire set of objectsin all such cases the generator has to invent a description that enables the hearer to identify the intended referentin the case of mr jones for example the program could identify him by providing his full name and address in the case of a tree some longer description may be necessaryhenceforth we will call the intended referent the target of the gre algorithmthe question that we set out to answer is whether existing gre algorithms produce adequate descriptions whenever such descriptions exist in short whether these algorithms are as we shall say completethe paper brings a degree of formal precision to this issue and reveals a number of reasons why current gre algorithms are incomplete we sketch remedies and discuss their consequences in terms of linguistic coverage and computational tractabilitywe take the incremental algorithm to represent the state of the art in this area and we minimize the deviations from this algorithmas a result this paper might be read as an investigation into how widely the ideas underlying the incremental algorithm can be used and the extent to which they may be generalizedthe main generalization that we will investigate involves complex boolean combinations of properties that is descriptions that involve more than a merely intersective combination of propertiessuch generalizations are natural because the properties involved are implicitly present in the kb as we will explain they become especially relevant when the algorithms are also generalized to generate references to sets rather than individual objectsbut before we arrive at these generalizations we will identify and confront a number of cases in which current gre algorithms are incomplete even with respect to merely intersective descriptionsin this paper we will deal with first mention descriptions only assuming that the information used for generating the description is limited to a kb containing complete 
information about which properties are true of each objectalso we focus on one shot descriptions disregarding cases where an object is described through its relations with other objects more crucially we follow dale and reiter in focusing on the semantic content of a description assuming that any combination of properties can be expressed by the nlg module responsible for linguistic realizationthis modular approach allows us to separate logical aspects of generation from purely linguistic aspects and it allows the realization module to base its decisions on complete information about which combination of properties is to be realizedaccordingly when we write generation of referring expressions or gre we will refer specifically to determination of the semantic content of a descriptionanalogously the word description will refer to the semantic content of a linguistic expression onlynote that our modular approach makes it unnatural to assume that a description is always expressed by a single noun phrase if several sentences are needed then so be itafter summarizing the incremental algorithm in section 2 in section 3 we take a closer look at the algorithm in its standard intersective form in which it identifies an object by intersecting a number of atomic propertieswe discuss cases in which this algorithm fails to find an adequate description even though such a description exists and we propose a number of possible remedieshaving extablished a completeness result for a version of the intersective incremental algorithm we turn to questions of completeness that involve more complex boolean combinations in section 4in section 5 we summarize the main results of our exploration and put them in perspectivethe incremental algorithm of dale and reiter singles out a target object from among some larger domain of entitiesit does this by logically conjoining a number of properties found in a part of the kb that represents information shared between speaker and hearerthe authors observed that the problem of finding a description that contains the minimum number of properties is computationally intractable they combined this with the known fact that speakers often produce nonminimal descriptions anyway accordingly they proposed an algorithm that only approximates full brevity while being of only linear complexityour summary of the algorithm glosses over many details yet still allows us to discuss completenessin particular we disregard any special provisions that might be made for the selection of head nouns because arguably this has to involve realizational issues1 the incremental algorithm produces a set l of properties p1 pn such that their logical conjunction forms a distinguishing description of the target object r in other words writing q for the extension of q the intersection p1 n n pn must equal the singleton set rit is a hillclimbing algorithm which finds better and better approximations of the target set r by accumulating more and more propertieshence the term incrementalthere is no backtrackingconsequently if some property pi in l is made redundant by later additions c pi then pi is retained as a member of l neverthelessin the full algorithm properties are analyzed as pairs consisting of an attribute and a valueattributes are ordered in a list aif ai precedes aj in a then ai is more preferred than aj as a consequence ai will be considered before aj by the algorithmsuppose r is the target object and d is the set of elements from which r is to be selectedthe algorithm iterates through a for each 
attribute ai it checks whether specifying a value for that attribute would rule out at least one object that has not already been ruled out if so the attribute is added to l with a suitable value c is the set of confusables at any given stage of the algorithm2 objects that are ruled out are removed from c the process of expanding l and contracting c continues until c r if and when this condition is met l is a distinguishing set of propertiesfor easy generalizability the algorithm will be cast in settheoretic termswe first present a version that focuses on properties without separating these into attributes and values and assume the properties themselves are ordered in a list p this version of the algorithm will be called drprop or dr when there is no risk of confusionwe assume that the domain contains one or more objects other than the target object the socalled distractors thus r e d but are d return failure all properties in p have been tested and still c r assuming that the tests in the body of the loop take some constant amount of time the worstcase running time is on the order of na where na is the total number of propertiesso the algorithm has only linear complexitya slightly closer approximation of full brevity can be achieved if attributes and values are separated allowing the algorithm to choose the best value for each attributegiven an attribute findbestvalue selects the value that removes most distractors while still including the target r if no value includes r the function returns nilin case of a tie findbestvalue chooses the least specific of the contestantsfor example when dog rules out as many distractors as chihuahua chihuahua cannot be chosena is the list of attributes l is the set of attributevalue combinations returned by the algorithma further notational convention will be useful values will be identified by two indices the first of which identifies the attributethus to denote value j of attribute ai we write vijthis version of the algorithm will be called drattthe initializations of l and d are omitted for brevitywe will switch back and forth between dr and dratt depending on what is at stakelike dr dratt has linear complexitythis can be made precise in the following way3 if the running time of a call of findbestvalue is a constant times the number of values of the attribute ai then the worstcase running time of dratt is o where na equals the number of attributes in the language and nv the average number of values of all attributessome new definitions will be usefula gre algorithm is successful with respect to a given situation if it produces a distinguishing description of r in that situationwe will call an algorithm complete if it is successful in every situation in which a distinguishing description existssuccess is not always possible the properties in the kb may not be sufficient for individuating a given objectsuch nowin situations will not be held against an algorithmthe incremental algorithm generates descriptions that contain set intersection as their only boolean operationwe define a gre algorithm to be intersectively complete if it has the following property whenever an object can be characterized by intersecting a finite number of properties the algorithm will find such an intersectionwe would like to prove the incremental algorithm to be intersectively complete but we will meet a few obstacles before we get thereone assumption without which the incremental algorithm cannot be proven to be intersectively complete concerns the semantic relation between 
different values of a given attribute their extensions should not overlap in the following precise sense van deemter generating referring expressions values can overlap for different reasonssome attributes have vague values which may be modeled as overlapping some objects may count as both red and orangealso values may derive from particular parts or aspects of an object for example if an object counts as metal because it has some metal parts then it may be listed as both metal and plasticfurther examples arise if the kb models relations through unanalyzed propertiesfor example a desk or a particular type of desk can stand in a given relation to more than one other companyto see the problems arising from overlapping values consider a kb that models which customer bought which types of desks and where c a b c d e fsuppose a is the target while the attribute boughtby is more preferred than colorthe value philips is chosen first reducing the initial set c to a b enow the algorithm is doomed to end in failure since the different values of color are unable to remove the unwanted b without also sacrificing anone of this can be corrected since the algorithm does not use backtrackingnote that a uniquely identifying description of a would have been possible if only sony had been chosen instead of philips leading to a description like the brown desk bought by sonythe algorithm does not just fail it fails in a situation where success was perfectly achievablehow can this limitation be remediedone might introduce a limited kind of backtracking which remembers where the algorithm has encountered overlapping values and when it results in failure goes back to the lastencountered situation where it has made a choice between overlapping values if this does not lead to success the algorithm backtracks to the previous choice situation and so on until no more choice situations are left or a distinguishing description has been reached unfortunately this algorithm becomes intractable if values overlap too often in the worst case we are back to having to check all combinations of propertiesa simpler and computationally more efficient algorithm would include all overlapping values that are true of the target while also removing some distractorsthis could be done as follows whenever a value vij of an attribute ai is selected for inclusion in l search for other values of the same attribute that have the target r as an element if such a value vik is found check whether it stands in the subset relation to vij if not then include vik as well next search for yet another value vil of the same attribute that has r as an element and include vil if it does not stand in the subset relation to vij or vik and so on until no other values of ai exist that have r as an element then move on to the next attributethis algorithm has a worstcase running time of o4 in our example this algorithm would produce a set consisting of the properties bought by sony and bought by philips which can be realized as the desk bought by sony and by philips if we change the example by letting philips buy c as well as a the algorithm will go on to select the property brown resulting in a set of properties that may be realized as the brown desk bought by sony and by philipssuch descriptions appear to be quite naturalone might even argue on gricean grounds that identifying a simply as being bought by philips can give rise to the false implicature that a was not bought by sonythis suggests that the proposed algorithm might also be empirically more 
accurate than the one using limited backtracking provided of course properties are properly aggregated to prove intersective completeness certain assumptions concerning the cardinality of sets need to be madeto give an extreme example suppose one wanted to refer to a real number that does not have a proper name then the class of potentially useful properties is so vast that no gre algorithm can take them all into considerationas long as the number of properties is denumerably infinite only termination becomes problematic if a uniquely referring description p1 n n pn exists then the algorithm will find one in finite time since each of the n properties in the description will be found in finite time if no distinguishing description exists however the algorithm never terminatesin the less likely case where the set of properties is nondenumerably infinite completeness becomes problematic as well since it is impossible for the algorithm to consider all properties hence successful combinations may be overlooked infinity of the set of distractors results in a different problemthe key question is whether there exists an effective procedure for removing distractors if no such procedure exists the incremental algorithm can only be applied after a property has been found that cuts down the set of distractors to a manageable sizeto be on the safe side when we prove completeness we will assume that the set of properties is at most denumerably infinite while the set of distractors is finitethese assumptions are harmless in connection with present nlg systems all of which work with relatively small setsit is unclear how human speakers cope with large sets of properties andor distractors but this question goes beyond our present concernsbased on these considerations we prove intersective completeness under some assumptions concerning infinity and overlapping valueswe deal first with dr then with the more complex dratttheorem 1 completeness of dr suppose there are at most denumerably many properties and finitely many distractorsthen if an object can be individuated by intersecting a finite number of properties dr will find such an intersectionproof suppose q1 n n qm r where the properties q1qm occur in p in the order indicated by the subscriptsnow either dr returns success before it has inspected all of q1 qm or it reaches the point where all of q1 qm have been inspectedthis does not mean that all of q1 qm have necessarily been included in l since other properties in p may have been selected that because some of q1 qm not to remove any distractorsyet when all of q1qm have been inspected success must have been achievedto see this let desi be the description that results after processing qithen a proof by induction over van deemter generating referring expressions i shows that desi c q1 n n qi for all i m it follows that desm c q1 n n qm rbut r e desm so desm rtheorem 2 completeness of dratt assume attributes have no overlapping values and there are at most denumerably many attributes and values and finitely many distractorsthen if an object can be individuated by intersecting a finite number of properties dratt will find such an intersectionproof given assumption if dr is complete then so is drattto see this let bv abbreviate findbestvaluesuppose there is a value vij of attribute ai that leads to a distinguishing description whereas bv does notthen a contradiction is derived as followsfor certain via1 vian so either r e bv or there exists x r for which x e via1 n n vian n bv but case contradicts the 
definition of findbestvalue case on the other hand implies that hence x e bv while x e vijbut r e bv n vij so bv and vij are not disjointconsequently by assumption case implies that via1 n n vian n vij is a real subset of via1 n n vian n bv contradicting the fact that findbestvalue prefers a more general value over a more specific one only if it removes the same distractorsboth versions of the incremental algorithm have been proven to be intersectively completenow we widen the issue to include all other boolean combinations involving negation and disjunction 5 this is natural since properties expressed by boolean combinations are implicit in the kb if the kb lists the property poodle and the property alsatian then it implicitly contains the property of being either a poodle or an alsatianthis move will however only have its full impact when we also widen the issue to reference to sets of objectsin the new setting it will be useful to generalize our earlier notion of intersective completeness calling a gre algorithm boolean complete iff it finds a boolean description of a set whenever one can be given on the basis of the properties in the kbgenerating descriptions is even more important if the target is a set than if it is a single object even if the objects in the set have proper names the set as a whole may lack a name yet reference to sets has long been disregarded in nlgin this section we sketch generalizations of dr that produce descriptions of setsto begin with the algorithm drplural finds intersections p1 n npn of atomic properties p1 pn whose extension equals a given target set s since s may or may not be a singleton drplural subsumes dras before we assume a nonempty set of distractors that is s c d but s d6 return failure all properties in p have been tested yet c s note that s takes the place of the target object r in the earlier algorithms the process of expanding l and contracting c continues until c s because this is basically the same algorithm as dr it has the same computational complexity of o where na is the cardinality of p drplural characterizes a set by scrutinizing its elementsthis does not work for properties like being of the same age which crucially pertain to sets of objects the algorithm can however be generalized to cover such cases if we initialize c not to d but to the powerset of d after which the algorithm selects properties of sets removing from p all those sets for which the property is falsefor example selection of being of the same age removes all those sets whose elements are not of the same age as each other selection of forming a football team removes all sets that do not make up a football team and so onas a result the algorithm generates descriptions of sets of collective entities in this way descriptions such as those teams all of whose members are of the same age can be generatedin this collective version of drplural the target s is a set of sets p is a list of properties of sets so if pi e p then pill is also a set of setsas in the case of distributive properties describing one entity is a special case of describing a set of entitiesonce again these adaptations leave the algorithm structurally unchanged sets replace objects throughoutyet they because the complexity of the algorithm to become exponential since testing whether c s involves inspecting all elements of c of which there can be up to 2nd this algorithm can also be applied to distributive properties if these are upgraded to the level of sets let a newfangled distributive property be true 
of a set iff the property is true of all its elements this requires that the target s is always cast as a set of sets even if it is viewed distributivelyfor example if a set of playerssay a b and care to be characterized as a collection then s a b c if they are to be characterized distributively then s a b c a b a c b c a b cin this way the algorithm is able to combine collective and distributive properties as in those football teams whose members are britishwe will not explore collective versions of the incremental algorithm further here focusing instead on the relatively simple case of drplural in which all properties are distributiveas in the case of dr it is easy to separate attributes and values when referring to sets allowing a closer approximation of full brevity the resulting algorithm drpluralatt is to drplural as dratt is to dr overlapping values can be treated as described in section 31in what follows we will once again take propertyoriented versions of the incremental algorithm as our starting point but implications for the separation between attributes and values will be mentioned where they are nontrivialnow that we are able to generate references to sets let us move away from purely intersective descriptions on to full boolean combinations of propertiesconsider a kb whose domain is a set of animals and whose only attributes are type and color type dog poodle color black white in this situation the incremental algorithm does not allow us to individuate any of the animalsintuitively however the kb should enable one to refer to c for example since it is the only black dog that is not a poodle c black n poodle a similar gap exists where disjunctions might be usedfor example the incremental algorithm does not make the set of dogs that are either white or poodles referrable whereas it is referrable in englishfor example the white dogs and the poodlesin the next two sections we will investigate how negation and disjunction can be taken into account in grebut first we introduce a trick for determining whether unique identification of an entity is possible in a given situation7 the idea is to calculate for each element d in the domain the satellite set of d that is the intersection of the extensions of all the properties true of d taking all extensions from our dog example we have satellite sets show which sets can be uniquely identified and which ones cannotin the case of the dogs for example no intersective description of c is possible because in the satellite sets c is always accompanied by other objects more generally in this example no object in the domain is uniquely identifiable since no object occurs in a satellite set that is a singletonsatellite sets can also be applied to the construction of descriptionsthe entity a b for example is uniquely described by the intersection dog n poodle n black and this can be read off the list of satellite setstwo of the three properties in dog n poodle n black are redundant howeverusing satellites sets for the construction of descriptions can be particularly useful when properly generalized to boolean descriptions but shortening the resulting descriptions in a computationally efficient way is difficult the present paper will focus on another approach to boolean descriptions which takes the incremental algorithm as its point of departure in this section we will show how full boolean descriptions can be generatedthis can be done in many different ways depending among other things on what form of descriptions are preferred for example 
disjunctions of conjunctions or conjunctions of disjunctionswe will aim for the latter while staying as close as possible to the incremental algorithmthe algorithm proceeds as followsfirst we add negations to the list of atomic propertiesthen drplural runs a number of times first in phase 1 the algorithm is performed using all positive and negative literals if this algorithm ends before c s phase 2 is entered in which further distractors are removed from c by making use of negations of intersections of two literals and so on until either c s or all combinations have been tried observe that the negation of an intersection comes down to set union because of de morgans law p1 n n pn p1 you you pnthus phase 2 of the algorithm deals with disjunctions of length 2 phase 3 deals with disjunctions of length 3 and so onoptimizations may be applied to shorten the resulting descriptionsfor instance a description of the form n can be simplified to using standard algorithms such optimizations however are less urgent than in the case of the more verbose descriptions generated using satellite sets and we will disregard optimizations herea schematic presentation may be useful in which p1_ stands for any literal that is any atomic property or its negationthe length of a property will equal the number of literals van deemter generating referring expressions occurring in itwe will say that a drplural phase uses a set of properties x if it loops through the properties in x drbe phase 1perform drplural using all properties of the form p_if this is successful then stop otherwise go to phase 2phase 2based on the values of l and c coming out of phase 1 perform drplural using all properties of the form p_ you p_if this is successful then stop otherwise go to phase 3phase 3based on the values of l and c coming out of phase 2 perform drplural using all properties of the form p_ you p_ you p_if this is successful then stop otherwise go to phase 4etcone can require without loss of generality that no property considered at any phase may have different occurrences of the same atomtherefore since at phase n there is room for properties of length n the maximal number of phases equals the total number of atomic propertiesconsider our old example where the preference order of atomic properties corresponds with the order in which they are listed and where the same order extends to their negations all of which are less preferredabbreviating b black d dog p poodle and w white we have p now if s c d e and s c are to be characterized nothing eventful happensin both cases a description is found during phase 1 p in the first case b n p in the secondthe situation gets more interesting if s a b d e which triggers phase 2for instance if positive literals precede negative literals the properties relevant for phase 2 might be ordered as follows during phase 1 no property is selected since the only property true of all elements in s a b d e is d which fails to remove any distractorsduring phase 2 one property after another is rejectedfor example the property bud is rejected because it does not remove any distractorsthe first property that is true of all elements of s while also removing distractors is p you w this property removes all distractors at once causing the algorithm to end with l poodle you white as the complete descriptionif we modify the example by letting black a c and s b c d e then the description l black you poodle is founddrboolean is incremental not only within a phase but also from one phase to the next which causes 
shorter disjunctions to be favored over longer onesonce a property has been selected it will not be abandoned even if properties selected during later phases make it logically superfluousas a result one may generate descriptions like x n in a situation where y you z would have sufficed c_ xthis is not unlike some of the redundancies generated by dale and reiters algorithm and as in their case it is unclear whether this is descriptively adequateadaptations can be made if neededfor instance phases might run separately before running in combination first phase 1 then 2 then 12 then 3 then 13 then 23 then 123 and so on8 as a result of this adaptation the description y you z would be generated because of phase 2 alonedouble incrementality however does not save drboolean from intractabilityto estimate running time as a function of the number of properties in the kb and those in the description we can mirror an argument in dale and reiter to show that the maximal number of properties to be considered equals if nl na then this is on the order of nnl a to avoid intractability the algorithm can be prunedno matter where this is done the result is a polynomial algorithmby cutting off after phase 1 for example only atomic properties are combined producing such descriptions as the black dog that is not a poodle disregarding more complex descriptions as a result completeness is lost but only for references to nonsingleton sets because set union does not add descriptive power where the description of singletons is concernedthe number of properties to be considered by this simpler algorithm equals 2 2na 1to produce descriptions like white n as well the algorithm can be cut off one phase later leading to a worstcase running time of o and so on for more and more complex descriptionsdrboolea can of course be modified to take advantage of the distinction between attributes and valuessuppose for example that v1 you you vn takes precedence over w1 you you wn whenever there are more negative values among v1 vn than among w1 wnthen the preference ordering between attributes may be taken into account if the number of negative values is the same in both unions in case of a tie the number of distractors removed by each of the two unions may decide if all this fails to tip the balance the relative specificity of attributes may be usedthe situation resembles that of dratt but in the case of the new algorithm drbooleanatt there is more scope for choice because it compares combinations of properties when the preference order of individual attributes has been decided it can happen that vi is more preferred than wj while wk is more preferred than vl in which case it is unclear whether v1 you you vn should be more preferred or w1 you you wn and the predicate p then the degrees of preference of both r and p are relevant and it is unclear which of the two is more importantonce drbooleanatt is constructed along these lines the question of overlapping values arises in exactly the same way as in the case of dratt and drpluralattthe problem arises if components of different unions overlap as when the algorithm compares vijuvkl and vijuvkl where vkl and vkl overlap in the sense of section 31 as in the case of dratt simply choosing the option that removes the most distractors may cause the algorithm to become incompletethis problem can be overcome as before using either limited backtracking or inclusion of all relevant options instead of exploring drbooleanatt any further we will return to its predecessor drboolean to prove that it 
is powerful enough to do its jobin section 33 we proved intersective completeness for two versions of dale and reiters incremental algorithm dr and drattwe now prove boolean completeness for drboolean the boolean extension of drpluraltheorem 3 completeness of drbookassume there are at most denumerably many properties and finitely many distractors then if a set can be individuated distributively by any boolean combination of properties drboolean will find such a combinationproof any boolean expression can be written in conjunctive normal form that is as an intersection of unions of literals theorem 3 follows from the following lemmalemma let cp be a cnf formula whose longest union has a length of n then drboolean will find a description cp that is coextensive with cp in at most n phasesthis is proven by induction on the size of n basic case if n 1 the lemma is equivalent to completeness of drplural the proof of which is analogous to that of the completeness of dr replacing r by s induction step suppose the lemma is true for all n inow consider a cnf cp whose longest union has length i let cp contain m unions of length i namely cp1 n n cpmthen cp can be written as the cnf x n cp1 n n cpm where all the unions in x have length ithe lemma is true for all n i so if x is sent to drboolean then the output is some x such that x x in fewer than i phases so if instead cp is sent to drboolean then after i 1 phases some possibly incomplete description 77 has been found such that 77 c xalso cp c 77phase i inspects all unions of length i including each of cp1 cpmtherefore unless a description coextensive with cp is found before phase i one will be found during phase ito see this suppose the algorithm finds 0 such that 0 cp1 n n cpmthen x n 0 cp but cp c 77 c x therefore also 77 n 0 cpthe gre algorithms discussed in this paper are fairly limited in their aspirationsfor example they do not involve relational descriptions or properties that are vague or context dependent moreover they disregard shades of salience relying instead on a simple dichotomy between those objects that are salient enough and those that are not finally like all other gre algorithms that we are aware of they disregard the generation of descriptions in intensional contexts but even within this limited brief existing algorithms are incompletein particular we have shown dale and reiters incremental algorithm to be intersectively incomplete with respect to attributes that have overlapping values and in some situations where the class of properties is infinitely largefurthermore the incremental algorithm excludes reference to sets and limits itself to purely intersective combinations of atomic properties causing the algorithm to be incomplete with respect to the set of all boolean combinationshaving noted these shortcomings we have modified the incremental algorithm in such a way that these limitations are removedthe result is a set of generalizations of the incremental algorithm for which we have proven completeness under appropriate assumptionsintegration of these different algorithms into one unified algorithm would be a nontrivial enterprise as we have shown in section 43integration with previously proposed extensions of the incremental algorithm would raise further questions stemming from the fact that our descriptions are structurally complexfor example consider the treatment of relational propertieswhich is better adding a relational property to a given incomplete description or adding a negated property making informed decisions 
about such questions with proper attention to their combined effects is a difficult task that is perhaps best tackled using the graphtheoretical approach outlined by krahmer van erk and verleg their approach is specifically suitable for accommodating different gre algorithms and treats relations in the same way as properties brevitywe have assumed that on the whole descriptions ought to be as brief as they can as long as they are uniquely identifyingbut in fact a description can contain much more than is logically necessary for identification even beyond the redundancies allowed by the incremental algorithmlogically superfluous properties can for example be motivated by overloading if they serve communicative purposes other than identification a description may also contain fewer properties than would be necessary for identificationfor example when no distinguishing description existsa nondistinguishing description may take the form either of a definite description or of an indefinite description in both cases the description may be useful even though it fails to be distinguishingtractabilitycomputational tractability has also been paramount in our explorationsthere is no agreement on the extent to which computational linguists should worry about the computational complexity of algorithms or about the precise way in which complexity is most relevant far from aiming to speak the last word on these issues the material discussed here does she would some light on themfor example even a fast algorithm can require a large number of calculations in which case a solution may never be found in the case of gre this happens when the set of distractors or the set of properties becomes extremely large conversely a complex algorithm can be safe to use if the domain is small this may be achieved by putting a bound on the size of the search space and this may be justifiable on empirical grounds one might on the other hand argue that bounding does not eliminate the disadvantages of an otherwise intractable algorithm because the true nature of an algorithm is best revealed by considering how it operates on unlimited cases be this as it may we believe that complexity theory can offer valuable insights into the structure of gre algorithms and that the growing attention to complexity in this area is a healthy development even if the practical implications are not always straightforwardrecent work also highlights an interesting mirror image of gre complexity a logically superfluous property may make it easier for the reader to find the referent van deemter generating referring expressions an interesting class of cases is explored in paraboni which focuses on descriptions of document partsconsider the description the medicine depicted in section 23if section 2 happens to contain only one figure then the description the medicine depicted in section 2 would have been logically sufficient but this description would have made it necessary for the reader in the worst case to search through all of section 2 making it less usefulexamples of this kind suggest that gre should also take the computational complexity of interpretation into accountexperimental research on minimal cooperative effort points in the same directionthanks are due to robert dale magnus halldorsson emiel krahmer paul piwek richard power ehud reiter and matthew stone for useful discussionshelpful comments from the reviewers of computational linguistics are also gratefully acknowledged
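As a concrete illustration of the DR-plural procedure described earlier in this entry, the following is a minimal Python sketch. The knowledge base is reconstructed from the running dog/poodle example in the text; the data layout (a preference-ordered list of property-name/extension pairs) and the helper names are assumptions of this sketch, not part of the original algorithm statement.

```python
def dr_plural(target, domain, properties):
    """Sketch of DR-plural: intersect atomic properties until the set of
    remaining distractors C equals the target set S, or fail.

    target     -- the set S to be described (a singleton reduces to DR)
    domain     -- the set D of all objects, with S a proper subset of D
    properties -- preference-ordered list of (name, extension) pairs,
                  where each extension is a set of objects
    """
    description = []            # L: the properties selected so far
    distractors = set(domain)   # C: objects still compatible with L

    for name, extension in properties:
        if not target <= extension:
            continue            # the property must be true of every element of S
        if distractors <= extension:
            continue            # the property removes no distractors
        description.append(name)
        distractors &= extension
        if distractors == target:
            return description  # C = S: a distinguishing description was found
    return None                 # all properties tried, yet C != S


# Knowledge base reconstructed from the running example in the text
DOMAIN = {"a", "b", "c", "d", "e"}
PROPERTIES = [("dog",    {"a", "b", "c", "d", "e"}),
              ("poodle", {"a", "b"}),
              ("black",  {"a", "b", "c"}),
              ("white",  {"d", "e"})]

print(dr_plural({"a", "b"}, DOMAIN, PROPERTIES))       # ['poodle']
print(dr_plural({"c", "d", "e"}, DOMAIN, PROPERTIES))  # None: needs negation
```

The second call fails because only positive atomic properties are available, which is exactly the gap that the boolean extension discussed above is meant to close.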
J02-1003
generating referring expressions boolean extensions of the incremental algorithmthis paper brings a logical perspective to the generation of referring expressions addressing the incompleteness of existing algorithms in this areaafter studying references to individual objects we discuss references to sets including boolean descriptions that make use of negated and disjoined propertiesto guarantee that a distinguishing description is generated whenever such descriptions exist the paper proposes generalizations and extensions of the incremental algorithm of dale and reiter
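The phased boolean extension summarized above can be sketched in the same style. Phase n tries unions of n literals (atoms and their negations), and the description L and the distractor set C are carried over from one phase to the next. The data layout mirrors the previous sketch and is likewise an assumption; the example reproduces the poodle-or-white case discussed in the text.

```python
from itertools import combinations


def dr_boolean(target, domain, properties):
    """Sketch of the phased DR-Boolean strategy: phase 1 uses single literals,
    phase n uses unions of n literals over distinct atoms, carrying the
    description L and the distractor set C across phases."""
    literals = [(name, set(ext)) for name, ext in properties]                        # positive literals first
    literals += [("not " + name, set(domain) - set(ext)) for name, ext in properties]  # then their negations

    description = []            # L: one entry per selected union of literals
    distractors = set(domain)   # C

    for phase in range(1, len(properties) + 1):
        for combo in combinations(literals, phase):
            atoms = {name.replace("not ", "") for name, _ in combo}
            if len(atoms) < phase:
                continue        # no union may mention the same atom twice
            union = set().union(*(ext for _, ext in combo))
            if not target <= union:
                continue        # the union must hold of every element of S
            if distractors <= union:
                continue        # the union must remove at least one distractor
            description.append(" or ".join(name for name, _ in combo))
            distractors &= union
            if distractors == target:
                return description   # read the result as a conjunction of unions
    return None


DOMAIN = {"a", "b", "c", "d", "e"}
PROPERTIES = [("dog",    {"a", "b", "c", "d", "e"}),
              ("poodle", {"a", "b"}),
              ("black",  {"a", "b", "c"}),
              ("white",  {"d", "e"})]

print(dr_boolean({"c", "d", "e"}, DOMAIN, PROPERTIES))       # ['not poodle'] (phase 1)
print(dr_boolean({"a", "b", "d", "e"}, DOMAIN, PROPERTIES))  # ['poodle or white'] (phase 2)
```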
classbased probability estimation using a semantic hierarchy this article concerns the estimation of a particular kind of probability namely the probability of a noun sense appearing as a particular argument of a predicate in order to overcome the accompanying sparsedata problem the proposal here is to define the probabilities in terms of senses from a semantic hierarchy and exploit the fact that the senses can be grouped into classes consisting of semantically similar senses there is a particular focus on the problem of how to determine a suitable class for a given sense or alternatively how to determine a suitable level of generalization in the hierarchy a procedure is developed that uses a chisquare test to determine a suitable level of generalization in order to test the performance of the estimation method a pseudodisambiguation task is used together with two alternative estimation methods each method uses a different generalization procedure the first alternative uses the minimum description length principle and the second uses resniks measure of selectional preference in addition the performance of our method is investigated using both the standard pearson chisquare statistic and the loglikelihood chisquare statistic this article concerns the estimation of a particular kind of probability namely the probability of a noun sense appearing as a particular argument of a predicatein order to overcome the accompanying sparsedata problem the proposal here is to define the probabilities in terms of senses from a semantic hierarchy and exploit the fact that the senses can be grouped into classes consisting of semantically similar sensesthere is a particular focus on the problem of how to determine a suitable class for a given sense or alternatively how to determine a suitable level of generalization in the hierarchya procedure is developed that uses a chisquare test to determine a suitable level of generalizationin order to test the performance of the estimation method a pseudodisambiguation task is used together with two alternative estimation methodseach method uses a different generalization procedure the first alternative uses the minimum description length principle and the second uses resniks measure of selectional preferencein addition the performance of our method is investigated using both the standard pearson chisquare statistic and the loglikelihood chisquare statisticthis article concerns the problem of how to estimate the probabilities of noun senses appearing as particular arguments of predicatessuch probabilities can be useful for a variety of natural language processing tasks such as structural disambiguation and statistical parsing word sense disambiguation anaphora resolution and language modelingto see how such knowledge can be used to resolve structural ambiguities consider the following prepositional phrase attachment ambiguity fred ate strawberries with a spoonthe ambiguity arises because the prepositional phrase with a spoon can attach to either strawberries or atethe ambiguity can be resolved by noting that the correct sense of spoon is more likely to be an argument of atewith than strawberrieswith the problem with estimating a probability model defined over a large vocabulary of predicates and noun senses is that this involves a huge number of parameters which results in a sparsedata problemin order to reduce the number of parameters we propose to define a probability model over senses in a semantic hierarchy and to exploit the fact that senses can be grouped into 
classes consisting of semantically similar sensesthe assumption underlying this approach is that the probability of a particular noun sense can be approximated by a probability based on a suitably chosen classfor example it seems reasonable to suppose that the probability of chicken appearing as an object of the verb eat can be approximated in some way by a probability based on a class such as foodthere are two elements involved in the problem of using a class to estimate the probability of a noun sensefirst given a suitably chosen class how can that class be used to estimate the probability of the senseand second given a particular noun sense how can a suitable class be determinedthis article offers novel solutions to both problems and there is a particular focus on the second question which can be thought of as how to find a suitable level of generalization in the hierarchy1 the semantic hierarchy used here is the noun hierarchy of wordnet version 16previous work has considered how to estimate probabilities using classes from wordnet in the context of acquiring selectional preferences and this previous work has also addressed the question of how to determine a suitable level of generalization in the hierarchyli and abe use the minimum description length principle to obtain a level of generalization and resnik uses a simple technique based on a statistical measure of selectional preferencewe compare our estimation method with those of resnik and li and abe using a pseudodisambiguation taskour method outperforms these alternatives on the pseudodisambiguation task and an analysis of the results shows that the generalization methods of resnik and li and abe appear to be overgeneralizing at least for this tasknote that the problem being addressed here is the engineering problem of estimating predicate argument probabilities with the aim of producing estimates that will be useful for nlp applicationsin particular we are not addressing the problem of acquiring selectional restrictions in the way this is usually construed the purpose of using a semantic hierarchy for generalization is to overcome the sparse data problem rather than find a level of abstraction that best represents the selectional restrictions of some predicatethis point is considered further in section 5the next section describes the noun hierarchy from wordnet and gives a more precise description of the probabilities to be estimatedsection 3 shows how a class from wordnet can be used to estimate the probability of a noun sensesection 4 shows how a chisquare test is used as part of the generalization procedure and section 5 describes the generalization proceduresection 6 describes the alternative classbased estimation methods used in the pseudodisambiguation experiments and section 7 presents those experimentsthe noun hierarchy of wordnet consists of senses or what miller calls lexicalized concepts organized according to the isakindof relationnote that we are using concept to refer to a lexicalized concept or sense and not to a set of senses we use class to refer to a set of sensesthere are around 66000 different concepts in the noun hierarchy of wordnet version 16a concept in wordnet is represented by a synset which is the set of synonymous words that can be used to denote that conceptfor example the synset for the concept 2 is cocaine cocain coke snow c let syn be the synset for concept c and let cn c n e syn be the set of concepts that can be denoted by noun n the hierarchy has the structure of a directed acyclic graph where 
the edges of the graph constitute what we call the directisa relationlet isa be the transitive reflexive closure of directisa then c isa c implies c is a kind of c if c isa c then c is a hypernym of c and c is a hyponym of c in fact the hierarchy is not a single hierarchy but instead consists of nine separate subhierarchies each headed by the most general kind of concept such as and for the purposes of this work we add a common root dominating the nine subhierarchies which we denote there are some important points that need to be clarified regarding the hierarchyfirst every concept in the hierarchy has a nonempty synset even the most general concepts such as can be denoted by some noun the synset for is entity something second there is an important distinction between an individual concept and a set of conceptsfor example the individual concept should not be confused with the set or class consisting of concepts denoting kinds of entitiesto make this distinction clear we use c c c isa c to denote the set of concepts dominated by concept c including c itselffor example is the set consisting of those concepts corresponding to kinds of animals itselfthe probability of a concept appearing as an argument of a predicate is written p where c is a concept in wordnet v is a predicate and r is an argument position3 the focus in this article is on the arguments of verbs but the techniques discussed can be applied to any predicate that takes nominal arguments such as adjectivesthe probability p is to be interpreted as follows this is the probability that some noun n in syn when denoting concept c appears in position r of verb v the example used throughout the article is p runsubj which is the conditional probability that some noun in the synset of when denoting the concept appears in the subject position of the verb runnote that in practice no distinction is made between the different senses of a verb and that each use of a noun is assumed to correspond to exactly one concept4this section explains how a set of concepts or class from wordnet can be used to estimate the probability of an individual conceptmore specifically we explain how a set of concepts c where c is some hypernym of concept c can be used to estimate pone possible approach would be simply to substitute c for the individual concept c this is a poor solution however since p is the conditional probability that some noun denoting a concept in c appears in position r of verb v for example p runsubj is the probability that some noun denoting a kind of animal appears in the subject position of the verb runprobabilities of sets of concepts are obtained by summing over the concepts in the set this means that p runsubj is likely to be much greater than p runsubj and thus is not a good approximation of p runsubjwhat can be done though is to condition on sets of conceptsif it can be shown that p for some hypernym c of c is a reasonable approximation of p then we have a way of estimating pthe probability p can be obtained from p using bayes theorem since p and p are conditioned on the argument slot only we assume these can be estimated satisfactorily using relative frequency estimatesalternatively a standard smoothing technique such as goodturing could be used5 this leaves pcontinuing with the example the proposal is to estimate psubj using a relativefrequency estimate of psubj or an estimate based on a similar suitably chosen classthus assuming this choice of class p runsubj would be approximated as follows the following derivation shows that if p 
k for each child ci of c and p k then p is also equal to k note that the proof applies only to a tree since the proof assumes that c is partitioned by c and the sets of concepts dominated by each of the daughters of c which is not necessarily true for a directed acyclic graph wordnet is a dag but is a close approximation to a tree and so we assume this will not be a problem in practice6 the derivation in shows how probabilities conditioned on sets of concepts can remain constant when moving up the hierarchy and this suggests a way of finding a suitable set c as a generalization for concept c initially set c equal to c and move up the hierarchy changing the value of c until there is a significant change in pestimates of p for each child ci of c can be compared to see whether p has significantly changed and consider the probabilities p onlynote that this procedure rests on the assumption that p is close to p is equal to p when c is a leaf nodeso when finding a suitable level for the estimation of p eat obj for example we first assume that p obj is a good approximation of p obj and then apply the procedure to p obja feature of the proposed generalization procedure is that comparing probabilities of the form p where c is a class is closely related to comparing ratios of probabilities of the form pp note that for a given verb and argument position p is constant across classesequation is of interest because the ratio pp can be interpreted as a measure of association between the verb v and class c this ratio is similar to pointwise mutual information and also forms part of resniks association score which will be introduced in section 6thus the generalization procedure can be thought of as one that finds homogeneous areas of the hierarchy that is areas consisting of classes that are associated to a similar degree with the verb finally we note that the proposed estimation method does not guarantee that the estimates form a probability distribution over the concepts in the hierarchy and so a normalization factor is required we use psc to denote an estimate obtained using our method and c v r to denote the class chosen for concept c in position r of verb v pˆ denotes a relative frequency estimate and c denotes the set of concepts in the hierarchybefore providing the details of the generalization procedure we give the relativefrequency estimates of the relevant probabilities and deal with the problem of ambiguous datathe relativefrequency estimates are as follows where f is the number of triples in the data in which n is being used to denote c and v is the set of verbs in the datathe problem is that the estimates are defined in terms of frequencies of senses whereas the data are assumed to be in the form of triples a noun verb and argument positionall the data used in this work have been obtained from the british national corpus using the system of briscoe and carroll which consists of a shallowparsing component that is able to identify verbal argumentswe take a simple approach to the problem of estimating the frequencies of senses by distributing the count for each noun in the data evenly among all senses of the noun where fˆ is an estimate of the number of times that concept c appears in position r of verb v and cn is the cardinality of cnthis is the approach taken by li and abe ribas and mccarthy 7 resnik explains how this apparently crude technique works surprisingly wellalternative approaches are described in clark and weir abney and light and ciaramita and johnson in this section we show how to 
test whether p changes significantly when considering a node higher in the hierarchyconsider the problem of deciding whether psubj is a good approximation of psubj is the parent of in wordnetto do this the probabilities p are compared using a chisquare test where the ci are the children of in this case the null hypothesis of the test is that the probabilities p are the same for each child ciby judging the strength of the evidence against the null hypothesis how similar the true probabilities are likely to be can be determinedif the test indicates that the probabilities are sufficiently unlikely to be the same then the null hypothesis is rejected and the conclusion is that p subj is not a good approximation of psubjan example contingency table based on counts obtained from a subset of the bnc using the system of briscoe and carroll is given in table 1one column contains estimates of counts arising from concepts in ci appearing in the subject position of the verb run fˆ a second column presents estimates of counts arising from concepts in ci appearing in the subject position of a verb other than runthe figures in brackets are the expected values if the null hypothesis is truethere is a choice of which statistic to use in conjunction with the chisquare testthe usual statistic encountered in textbooks is the pearson chisquare statistic denoted x2 where oij is the observed value for the cell in row i and column j and eij is the corresponding expected valuean alternative statistic is the loglikelihood chisquare statistic denoted g28 g2 2 oij ij oij loge eij the two statistics have similar values when the counts in the contingency table are large the statistics behave differently however when the table contains low counts and since corpus data are likely to lead to some low counts the question of which statistic to use is an important onedunning argues for the use of g2 rather than x2 based on an analysis of the sampling distributions of g2 and x2 and results obtained when using the statistics to acquire highly associated bigramswe consider dunnings analysis at the end of this section and the question of whether to use g2 or x2 will be discussed further therefor now we continue with the discussion of how the chisquare test is used in the generalization procedurefor table 1 the value of g2 is 38 and the value of x2 is 25assuming a level of significance of α 005 the critical value is 126 thus for this α value the null hypothesis would not be rejected for either statistic and the conclusion would be that there is no reason to suppose that p subj is not a reasonable approximation of p subjas a further example table 2 gives counts for the children of in the object position of drinkagain the counts have been obtained from a subset of the bnc using the system of briscoe and carrollnot all the sets dominated by the children of are shown as some such as never appear in the object position of a verb in the datathis example is designed to show a case in which the null hypothesis is rejectedthe value of g2 for this table is 290 and the value of x2 is 212so for g2 even if an α value as low as 00005 were being used the null hypothesis would still be rejectedfor x2 the null hypothesis is rejected for α values greater than 0005this seems reasonable since the probabilities associated with the children of and the object position of drink would be expected to show a lot of variation across the childrena key question is how to select the appropriate value for αone solution is to treat α as a parameter and set it 
empirically by taking a heldout test set and choosing the value of α that maximizes performance on the relevant taskfor example clark and weir describes a prepositional phrase attachment algorithm that employs probability estimates obtained using the wordnet method described hereto set the value of α the performance of the algorithm on a development set could be compared across different values of α and the value that leads to the best performance could be chosennote that this approach sets no constraints on the value of α the value could be as high as 0995 or as low as 00005 depending on the particular applicationthere may be cases in which the conditions for the appropriate application of a chisquare test are not metone condition that is likely to be violated is the requirement that expected values in the contingency table not be too smallone response to this problem is to apply some kind of thresholding and either ignore counts below the threshold or apply the test only to tables that do not contain low countsribas li and abe mccarthy and wagner all use some kind of thresholding when dealing with counts in the hierarchy another approach would be to use fishers exact test which can be applied to tables regardless of the size of the counts they containthe main problem with this test is that it is computationally expensive especially for large contingency tableswhat we have found in practice is that applying the chisquare test to tables dominated by low counts tends to produce an insignificant result and the null hypothesis is not rejectedthe consequences of this for the generalization procedure are that lowcount tables tend to result in the procedure moving up to the next node in the hierarchybut given that the purpose of the generalization is to overcome the sparsedata problem moving up a node is desirable and therefore we do not modify the test for tables with low countsthe final issue to consider is which chisquare statistic to usedunning argues for the use of g2 rather than x2 based on the claim that the sampling distribution of g2 approaches the true chisquare distribution quicker than the sampling distribution of x2however agresti makes the opposite claim the sampling distributions of x2 and g2 get closer to chisquared as the sample size n increasesthe convergence is quicker for x2 than g2 in addition pedersen questions whether one statistic should be preferred over the other for the bigram acquisition task and cites cressie and read who argue that there are some cases where the pearson statistic is more reliable than the loglikelihood statisticfinally the results of the pseudodisambiguation experiments presented in section 7 are at least as good if not better when using x2 rather than g2 and so we conclude that the question of which statistic to use should be answered on a per application basisthe procedure for finding a suitable class c to generalize concept c in position r of verb v works as follows since the chosen hypernym sits at the top of the similarity classinitially concept c is assigned to a variable topthen by working up the hierarchy successive hypernyms of c are assigned to top and this process continues until the probabilities associated with the sets of concepts dominated by top and the siblings of top are significantly differentonce a node is reached that results in a significant result for the chisquare test the procedure stops and top is returned as topin cases where a concept has more than one parent the parent is chosen that results in the lowest value of the 
chisquare statistic as this indicates the probabilities are the most similarthe set top is the similarity class of c for verb v and position r figure 1 gives an algorithm for determining topfigure 2 gives an example of the procedure at workhere top stir obj is being determinedthe example is based on data from a subset of the bnc with 303 cases of an argument in the object position of stirthe g2 statistic is used together with an α value of 005initially top is set to and the probabilities corresponding to the children of are compared pobj pobj pobj and so on for the rest of the childrenthe chisquare test results in a g2 value of 145 compared to a critical value of 558since g2 is less than the critical value the procedure moves up to the next nodethis process continues until a significant result is obtained which first occurs at when comparing the children of thus is the chosen level of generalizationnow we show how the chosen level of generalization varies with α and how it varies with the size of the data seta note of clarification is required before presenting the resultsin related work on acquiring selectional preferences an example generalization determining top stir obj1997 li and abe 1998 wagner 2000 the level of generalization is often determined for a small number of handpicked verbs and the result compared with the researchers intuition about the most appropriate level for representing a selectional preferenceaccording to this approach if were chosen to represent in the object position of eat this might be considered an undergeneralization since might be considered more appropriatefor this work we argue that such an evaluation is not appropriate since the purpose of this work is probability estimation the most appropriate level is the one that leads to the most accurate estimate and this may or may not agree with intuitionfurthermore we show in section 7 that to generalize unnecessarily can be harmful for some tasks if we already have lots of data regarding why generalize any higherthus the purpose of this section is not to show that the acquired levels are correct but simply to show how the levels vary with α and the sample sizeto show how the level of generalization varies with changes in α top was determined for a number of handpicked triples over a range of values for αthe triples were chosen to give a range of strongly and weakly selecting verbs and a range of verb frequenciesthe data were again extracted from a subset of the bnc using the system of briscoe and carroll and the g2 statistic was used in the chisquare testthe results are shown in table 3the number of times the verb occurred with some object is also given in the tablethe results suggest that the generalization level becomes more specific as α increasesthis is to be expected since given a contingency table chosen at random a higher value of α is more likely to lead to a significant result than a lower value of αwe also see that for some cases the value of α has little effect on the levelwe would expect there to be less change in the level of generalization for strongly selecting verbs such as drink and eat and a greater range of levels for weakly selecting verbs such as seethis is because any significant difference in probabilities is likely to be more marked for a strongly selecting verb and likely to be significant over a wider range of α valuesthe table only provides anecdotal evidence but provides some support to this argumentto investigate more generally how the level of generalization varies with changes in α 
and also with changes in sample size we took 6 000 triples and calculated the difference in depth between c and top for each triplethe 6 000 triples were taken from the first experimental test set described in section 7 and the training data from this experiment were used to provide the countsan average difference in depth was then calculatedto give an example of how the difference in depth was calculated suppose generalized to via and in this case the difference would be threethe results for various levels of α and different sample sizes are shown in table 4the figures in each column arise from using the contingency tables based on the complete training data but with each count in the table multiplied by the percentage at the head of the columnthus the 50 column is based on contingency tables in which each original count is multiplied by 50 which is equivalent to using a sample onehalf the size of the original training setreading across a row shows how the generalization varies with sample size and reading down a column shows how it varies with αthe results show clearly that the extent of generalization decreases with an increase in the value of α supporting the trend observed in table 3the results also show that the extent of generalization increases with a decrease in sample sizeagain this is to be expected since any difference in probability estimates is less likely to be significant for tables with low countsthe approaches used for comparison are that of resnik subsequently developed by ribas and that of li and abe which has been adopted by mccarthy these have been chosen because they directly address the question of how to find a suitable level of generalization in wordnetthe first alternative uses the association score which is a measure of how well a set of concepts c satisfies the selectional preferences of a verb v for an argument position r9 an estimate of the association score ˆa can be obtained using relative frequency estimates of the probabilitiesthe key question is how to determine a suitable level of generalization for concept c or alternatively how to find a suitable class to represent concept c resniks solution to this problem is to choose the class that maximizes the association scoreit is not clear that the class with the highest association score is always the most appropriate level of generalizationfor example this approach does not always generalize appropriately for arguments that are negatively associated with some verbto see why consider the problem of deciding how well the concept satisfies the preferences of the verb eat for its objectsince locations are not the kinds of things that are typically eaten a suitable level of generalization would correspond to a class that has a low association score with respect to eathowever is a kind of in wordnet10 and choosing the class with the highest association score is likely to produce as the chosen classthis is a problem because the association score of with respect to eat may be too high to reflect the fact that is a very unlikely object of the verbnote that the solution to the verticalambiguity problem presented in the previous sections is able to generalize appropriately in such casescontinuing with the eat example our generalization procedure is unlikely to get as high as since the probabilities corresponding to the daughters of are likely to be very different with respect to the object position of eatthe second alternative uses the minimum description length principleli and abe use mdl to select a set of classes 
from a hierarchy together with their associated probabilities to represent the selectional preferences of a particular verbthe preferences and classbased probabilities are then used to estimate probabilities of the form p where n is a noun v is a verb and r is an argument slotli and abes application of mdl requires the hierarchy to be in the form of a thesaurus in which each leaf node represents a noun and internal nodes represent the class of nouns that the node dominatesthe hierarchy is also assumed to be in the form of a treethe classbased models consist of a partition of the set of nouns and a probability associated with each class in the partitionthe probabilities are the conditional probabilities of each class given the relevant verb and argument positionli and abe refer to such a partition as a cut and the cut together with the probabilities as a tree cut model the probabilities of the classes in a cut p satisfy the following constraint possible cut returned by mdlin order to determine the probability of a noun the probability of a class is assumed to be distributed uniformly among the members of that class since wordnet is a hierarchy with noun senses rather than nouns at the nodes li and abe deal with the issue of word sense ambiguity using the method described in section 3 by dividing the count for a noun equally among the concepts whose synsets contain the nounalso since wordnet is a dag li and abe turn wordnet into a tree by copying each subgraph with multiple parentsand so that each noun in the data appears at a leaf node li and abe remove those parts of the hierarchy dominated by a noun in the data an example cut showing part of the wordnet hierarchy is shown in figure 3 this is a possible cut for the object position of the verb eat and the cut consists of the following classes since the class in the cut containing is the probability p eatobj would be estimated as p eat objsimilarly since the class in the cut containing is the probability p eat obj would be estimated as p eat objthe uniformdistribution assumption means that cuts close to the root of the hierarchy result in a greater smoothing of the probability estimates than cuts near the leavesthus there is a tradeoff between choosing a model that has a cut near the leaves which is likely to overfit the data and a more general model near the root which is likely to underfit the datamdl looks ideally suited to the task of model selection since it is designed to deal with precisely this tradeoffthe simplicity of a model is measured using the model description length which is an informationtheoretic term and denotes the number of bits required to encode the modelthe fit to the data is measured using the data description length which is the number of bits required to encode the data the overall description length is the sum of the model description length and the data description length and the mdl principle is to select the model with the shortest description lengthwe used mccarthys implementation of mdlso that every noun is represented at a leaf node mccarthy does not remove parts of the hierarchy as li and abe do but instead creates new leaf nodes for each synset at an internal nodemccarthy also does not transform wordnet into a tree which is strictly required for li and abes application of mdlthis did create a problem with overgeneralization many of the cuts returned by mdl were overgeneralizing at the nodethe reason is that which is close to and dominated by has two parents and this daglike property was responsible for the 
overgeneralization and so we removed the link between and this appeared to solve the problem and the results presented later for the average degree of generalization do not show an overgeneralization compared with those given in li and abe the task we used to compare the classbased estimation techniques is a decision task previously used by pereira tishby and lee and rooth et al the task is to decide which of two verbs v and v is more likely to take a given noun n as an objectthe test and training data were obtained as followsa number of verbdirect object pairs were extracted from a subset of the bnc using the system of briscoe and carrollall those pairs containing a noun not in wordnet were removed and each verb and argument was lemmatizedthis resulted in a data set of around 13 million pairsto form a test set 3000 of these pairs were randomly selected such that each selected pair contained a fairly frequent verbeach instance of a selected pair was then deleted from the data to ensure that the test data were unseenthe remaining pairs formed the training datato complete the test set a further fairly frequent verb v was randomly chosen for each pairthe random choice was made according to the verbs frequency in the original data set subject to the condition that the pair did not occur in the training datagiven the set of triples the task is to decide whether or is the correct pair11 we acknowledge that the task is somewhat artificial but pseudodisambiguation tasks of this kind are becoming popular in statistical nlp because of the ease with which training and test data can be createdwe also feel that the pseudodisambiguation task is useful for evaluating the different estimation methods since it directly addresses the question of how likely a particular predicate is to take a given noun as an argumentan evaluation using a pp attachment task was attempted in clark and weir but the evaluation was limited by the relatively small size of the penn treebank11 we note that this procedure does not guarantee that the correct pair is more likely than the incorrect pair because of noise in the data from the parser and also because a highly plausible incorrect pair could be generated by chanceusing our approach the disambiguation decision for each triple was made according to the following procedure if max psc max psc if n has more than one sense the sense is chosen that maximizes the relevant probability estimate this explains the maximization over cnthe probability estimates were obtained using our classbased method and the g2 statistic was used for the chisquare testthis procedure was also used for the mdl alternative but using the mdl method to estimate the probabilitiesusing the association score for each test triple the decision was made according to the following procedure then choose else choose at random we use h to denote the set consisting of the hypernyms of c the inner maximization is over h assuming c is the chosen sense of n which corresponds to resniks method of choosing a set to represent c the outer maximization is over the senses of n cn which determines the sense of n by choosing the sense that maximizes the association scorethe first set of results is given in table 5our technique is referred to as the similarity class technique and the approach using the association score is referred to as assoc the results are given for a range of α values and demonstrate clearly that the performance of similarity class varies little with changes in α and that similarity class outperforms both mdl 
and assoc12 we also give a score for our approach using a simple generalization procedure which we call low class the procedure is to select the first class that has a count greater than zero which is likely to return a low level of generalization on the wholethe results show that our generalization technique only narrowly outperforms the simple alternativenote that although low class is based on a very simple generalization method the estimation method is still using our classbased technique by applying bayes theorem and conditioning on a class as described in section 3 the difference is in how the class is chosento investigate the results we calculated the average number of generalized levels for each approachthe number of generalized levels for a concept c is the difference in depth between c and top as explained in section 5for each test case the number of generalized levels for both verbs v and v was calculated but only for the chosen sense of n the results are given in the third column of table 5 and demonstrate clearly that both mdl and assoc are generalizing to a greater extent than similarity classthese results suggest that mdl and assoc are overgeneralizing at least for the purposes of this taskto investigate why the value for α had no impact on the results we repeated the experiment but with one fifth of the dataa new data set was created by taking every fifth pair of the original 13 million pairsa test set of 3000 triples was created from this new data set as before but this time only verbs that occurred between 100 and 1000 times were consideredthe results using these test and training data are given in table 6these results show a variation in performance across values for α with an optimal performance when α is around 075but even with this variation similarity class is still outperforming mdl and assoc across the whole range of α valuesnote that the α values corresponding to the lowest scores lead to a significant amount of generalization which provides additional evidence that mdl and assoc are overgeneralizing for this taskthe lowclass method scores highly for this data set also but given that the task is one that apparently favors a low level of generalization the high score is not too surprisingas a final experiment we compared the task performance using the x2 rather than g2 statistic in the chisquare testthe results are given in table 7 for the complete data set13 the figures in brackets give the average number of generalized levelsthe x2 statistic is performing at least as well as g2 and the results show that the average level of generalization is slightly higher for g2 than x2this suggests a possible explanation for the results presented here and those in dunning that the x2 statistic provides a less conservative test when counts in the contingency table are lowa less conservative test is better suited to the pseudodisambiguation task since it results in a lower level of generalization on the whole which is good for this taskin contrast the task that dunning considers the discovery of bigrams is better served by a more conservative testwe have presented a classbased estimation method that incorporates a procedure for finding a suitable level of generalization in wordnetthis method has been shown to provide superior performance on a pseudodisambiguation task compared with two alternative approachesan analysis of the results has shown that the other approaches appear to be overgeneralizing at least for this taskone of the features of the generalization procedure is the 
way that α the level of significance in the chisquare test is treated as a parameterthis allows some control over the extent of generalization which can be tailored to particular taskswe have also shown that the task performance is at least as good when using the pearson chisquare statistic as when using the loglikelihood chisquare statisticthere are a number of ways in which this work could be extendedone possibility would be to use all the classes dominated by the hypernyms of a concept rather than just one to estimate the probability of the conceptan estimate would be obtained for each hypernym and the estimates combined in a linear interpolationan approach similar to this is taken by bikel in the context of statistical parsingthere is still room for investigation of the hiddendata problem when data are used that have not been sense disambiguatedin this article a very simple approach is taken which is to split the count for a noun evenly among the nouns sensesabney and light have tried a more motivated approach using the expectation maximization algorithm but with little successthe approach described in clark and weir is shown in clark to have some impact on the pseudodisambiguation task but only with certain values of the α parameter and ultimately does not improve on the best performancefinally an issue that has not been much addressed in the literature is how the accuracy of classbased estimation techniques compare when automatically acquired classes as opposed to the manually created classes from wordnet are usedthe pseudodisambiguation task described here has also been used to evaluate clustering algorithms but with different data and so it is difficult to compare the resultsa related issue is how the structure of wordnet affects the accuracy of the probability estimateswe have taken the structure of the hierarchy for granted without any analysis but it may be that an alternative design could be more conducive to probability estimationthis article is an extended and updated version of a paper that appeared in the proceedings of naacl 2001the work on which it is based was carried out while the first author was a dphil student at the university of sussex and was supported by an epsrc studentshipwe would like to thank diana mccarthy for suggesting the pseudodisambiguation task and providing the mdl software john carroll for supplying the data and ted briscoe geoff sampson gerald gazdar bill keller ted pedersen and the anonymous reviewers for their helpful commentswe would also like to thank ted briscoe for presenting an earlier version of this article on our behalf at naacl 2001
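A compact sketch may help make the generalization procedure of the article above concrete: starting from a concept, climb the hypernym chain, and at each step test whether the counts for the children of the current parent differ significantly under the G² statistic on a 2 x k contingency table. The WordNet bookkeeping (parent/children lookup and sense-frequency counts) is supplied by caller-provided callables, which are assumptions of this sketch; scipy is used only for the chi-square critical value, and nodes with several parents are ignored for simplicity.

```python
from math import log
from scipy.stats import chi2


def g_squared(table):
    """Log-likelihood chi-square statistic G^2 = 2 * sum_ij O_ij * ln(O_ij / E_ij)
    for a 2 x k contingency table given as two equal-length rows of counts."""
    col_totals = [a + b for a, b in zip(*table)]
    row_totals = [sum(row) for row in table]
    n = sum(row_totals)
    g2 = 0.0
    for row, row_total in zip(table, row_totals):
        for o, col_total in zip(row, col_totals):
            e = row_total * col_total / n   # expected count under the null hypothesis
            if o > 0:
                g2 += o * log(o / e)
    return 2.0 * g2


def significant(counts_verb, counts_rest, alpha):
    """Test whether the sibling classes differ in their association with the verb:
    row 1 = counts in slot r of verb v, row 2 = counts in slot r of all other verbs."""
    keep = [i for i, (a, b) in enumerate(zip(counts_verb, counts_rest)) if a + b > 0]
    if len(keep) < 2:
        return False   # too little data to test; caller keeps generalising
    table = ([counts_verb[i] for i in keep], [counts_rest[i] for i in keep])
    dof = len(keep) - 1
    return g_squared(table) > chi2.ppf(1.0 - alpha, dof)


def generalise(concept, verb, slot, parent, children, count, alpha=0.05):
    """Return top(c, v, r): climb from `concept` until the children of the current
    parent differ significantly.  parent(), children() and count() are caller-supplied
    callables over WordNet and the training data (hypothetical in this sketch);
    parent() is assumed to return None at the added root."""
    top = concept
    while parent(top) is not None:
        kids = children(parent(top))                 # top together with its siblings
        counts_verb = [count(k, verb, slot) for k in kids]
        counts_rest = [count(k, None, slot) - count(k, verb, slot) for k in kids]
        if significant(counts_verb, counts_rest, alpha):
            return top                               # stop: do not generalise further
        top = parent(top)
    return top                                       # reached the added root


# Illustrative only (invented counts for two sibling classes):
print(significant([10, 40], [200, 180], alpha=0.05))  # True: the children differ
```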
J02-2003
classbased probability estimation using a semantic hierarchythis article concerns the estimation of a particular kind of probability namely the probability of a noun sense appearing as a particular argument of a predicatein order to overcome the accompanying sparsedata problem the proposal here is to define the probabilities in terms of senses from a semantic hierarchy and exploit the fact that the senses can be grouped into classes consisting of semantically similar sensesthere is a particular focus on the problem of how to determine a suitable class for a given sense or alternatively how to determine a suitable level of generalization in the hierarchya procedure is developed that uses a chisquare test to determine a suitable level of generalizationin order to test the performance of the estimation method a pseudodisambiguation task is used together with two alternative estimation methodseach method uses a different generalization procedure the first alternative uses the minimum description length principle and the second uses resniks measure of selectional preferencein addition the performance of our method is investigated using both the standard pearson chisquare statistic and the loglikelihood chisquare statisticbriefly we populate the wordnet hierarchy based on corpus frequencies and then determine the appropriate probability estimate at each node in the hierarchy by using chi square to determine whether to generalize an estimate to a parent node in the hierarchy
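To round off the entry above, a small hedged sketch of how the pieces fit together at decision time: the Bayes inversion that turns an estimate of p(v | [c'], r) into an estimate of p(c | v, r), and the pseudo-disambiguation rule that picks the verb under which the best-scoring sense of the noun gets the higher estimate. The probability inputs and the `estimate` callable are assumed to be supplied by a training pipeline; normalisation over all concepts and random tie-breaking are omitted.

```python
def class_based_estimate(p_v_given_class, p_c_given_r, p_v_given_r):
    """Bayes inversion sketched in the article:
        p(c | v, r)  ~  p(v | [c'], r) * p(c | r) / p(v | r),
    up to a normalisation constant over the concepts in the hierarchy.
    The three arguments are relative-frequency estimates computed elsewhere."""
    return p_v_given_class * p_c_given_r / p_v_given_r


def choose_verb(noun_senses, v1, v2, estimate):
    """Pseudo-disambiguation rule: pick the verb under which the best sense of the
    noun scores higher.  `estimate(c, v)` is a caller-supplied callable, e.g. built
    from class_based_estimate; ties are broken in favour of v1 in this sketch."""
    best1 = max(estimate(c, v1) for c in noun_senses)
    best2 = max(estimate(c, v2) for c in noun_senses)
    return v1 if best1 >= best2 else v2
```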
automatic labeling of semantic roles present a system for identifying the semantic relationships or filled by constituents of a sentence within a semantic frame given an input sentence and a target word frame the system labels constituents with either abstract semantic roles such as or more domainspecific semantic roles such as and the system is based on statistical classifiers trained on roughly 50000 sentences that were handannotated with semantic roles by the framenet semantic labeling project we then parsed each training sentence into a syntactic tree and extracted various lexical and syntactic features including the phrase type of each constituent its grammatical function and its position in the sentence these features were combined with knowledge of the predicate verb noun or adjective as well as information such as the prior probabilities of various combinations of semantic roles we used various lexical clustering algorithms to generalize across possible fillers of roles test sentences were parsed were annotated with these features and were then passed through the classifiers our system achieves 82 accuracy in identifying the semantic role of presegmented constituents at the more difficult task of simultaneously segmenting constituents and identifying their semantic role the system achieved 65 precision and 61 recall our study also allowed us to compare the usefulness of different features and feature combination methods in the semantic role labeling task we also explore the integration of role labeling with statistical syntactic parsing and attempt to generalize to predicates unseen in the training data we present a system for identifying the semantic relationships or semantic roles filled by constituents of a sentence within a semantic framegiven an input sentence and a target word and frame the system labels constituents with either abstract semantic roles such as agent or patient or more domainspecific semantic roles such as speaker message and topicthe system is based on statistical classifiers trained on roughly 50000 sentences that were handannotated with semantic roles by the framenet semantic labeling projectwe then parsed each training sentence into a syntactic tree and extracted various lexical and syntactic features including the phrase type of each constituent its grammatical function and its position in the sentencethese features were combined with knowledge of the predicate verb noun or adjective as well as information such as the prior probabilities of various combinations of semantic roleswe used various lexical clustering algorithms to generalize across possible fillers of rolestest sentences were parsed were annotated with these features and were then passed through the classifiersour system achieves 82 accuracy in identifying the semantic role of presegmented constituentsat the more difficult task of simultaneously segmenting constituents and identifying their semantic role the system achieved 65 precision and 61 recallour study also allowed us to compare the usefulness of different features and feature combination methods in the semantic role labeling taskwe also explore the integration of role labeling with statistical syntactic parsing and attempt to generalize to predicates unseen in the training datarecent years have been exhilarating ones for natural language understandingthe excitement and rapid advances that had characterized other languageprocessing tasks such as speech recognition partofspeech tagging and parsing have finally begun to appear in tasks in 
which understanding and semantics play a greater rolefor example there has been widespread commercial deployment of simple speechbased natural language understanding systems that answer questions about flight arrival times give directions report on bank balances or perform simple financial transactionsmore sophisticated research systems generate concise summaries of news articles answer factbased questions and recognize complex semantic and dialogue structurebut the challenges that lie ahead are still similar to the challenge that the field has faced since winograd moving away from carefully handcrafted domaindependent systems toward robustness and domain independencethis goal is not as far away as it once was thanks to the development of large semantic databases such as wordnet and progress in domainindependent machine learning algorithmscurrent information extraction and dialogue understanding systems however are still based on domainspecific frameandslot templatessystems for booking airplane information use domainspecific frames with slots like orig city dest city or depart time systems for studying mergers and acquisitions use slots like products relationship joint venture company and amount for natural language understanding tasks to proceed beyond these specific domains we need semantic frames and semantic understanding systems that do not require a new set of slots for each new application domainin this article we describe a shallow semantic interpreter based on semantic roles that are less domain specific than to airport or joint venture companythese roles are defined at the level of semantic frames of the type introduced by fillmore which describe abstract actions or relationships along with their participantsfor example the judgement frame contains roles like judge evaluee and reason and the statement frame contains roles like speaker addressee and message as the following examples show these shallow semantic roles could play an important role in information extractionfor example a semantic role parse would allow a system to realize that the ruling that is the direct object of change in plays the same theme role as the ruling that is the subject of change in the fact that semantic roles are defined at the frame level means for example that the verbs send and receive would share the semantic roles defined with respect to a common transfer framesuch common frames might allow a questionanswering system to take a question like and discover that is relevant in constructing an answer to the question this shallow semantic level of interpretation has additional uses outside of generalizing information extraction question answering and semantic dialogue systemsone such application is in word sense disambiguation where the roles associated with a word can be cues to its sensefor example lapata and brew and others have shown that the different syntactic subcategorization frames of a verb such as serve can be used to help disambiguate a particular instance of the wordadding semantic role subcategorization information to this syntactic information could extend this idea to use richer semantic knowledgesemantic roles could also act as an important intermediate representation in statistical machine translation or automatic text summarization and in the emerging field of text data mining finally incorporating semantic roles into probabilistic models of language may eventually yield more accurate parsers and better language models for speech recognitionthis article describes an algorithm for 
identifying the semantic roles filled by constituents in a sentencewe apply statistical techniques that have been successful for the related problems of syntactic parsing partofspeech tagging and word sense disambiguation including probabilistic parsing and statistical classificationour statistical algorithms are trained on a handlabeled data set the framenet database the framenet database defines a tag set of semantic roles called frame elements and included at the time of our experiments roughly 50000 sentences from the british national corpus handlabeled with these frame elementsthis article presents our system in stages beginning in section 2 with a more detailed description of the data and the set of frame elements or semantic roles usedwe then introduce the statistical classification technique used and examine in turn the knowledge sources of which our system makes usesection 4 describes the basic syntactic and lexical features used by our system which are derived from a penn treebankstyle parse of individual sentences to be analyzedwe break our task into two subproblems finding the relevant sentence constituents and giving them the correct semantic labels section 6 adds higherlevel semantic knowledge to the system attempting to model the selectional restrictions on role fillers not directly captured by lexical statisticswe compare handbuilt and automatically derived resources for providing this informationsection 7 examines techniques for adding knowledge about systematic alternations in verb argument structure with sentencelevel featureswe combine syntactic parsing and semantic role identification into a single probability model in section 8section 9 addresses the question of generalizing statistics from one target predicate to another beginning with a look at domainindependent thematic roles in section 91finally we draw conclusions and discuss future directions in section 10semantic roles are one of the oldest classes of constructs in linguistic theory dating back thousands of years to paninis karaka theory longevity in this case begets variety and the literature records scores of proposals for sets of semantic rolesthese sets of roles range from the very specific to the very general and many have been used in computational implementations of one type or anotherat the specific end of the spectrum are domainspecific roles such as the from airport to airport or depart time discussed above or verbspecific roles such as eater and eaten for the verb eatthe opposite end of the spectrum consists of theories with only two protoroles or macroroles protoagent and protopatient in between lie many theories with approximately 10 roles such as fillmores list of nine agent experiencer instrument object source goal location time and path1 sample domains and frames from the framenet lexiconmany of these sets of roles have been proposed by linguists as part of theories of linking the part of grammatical theory that describes the relationship between semantic roles and their syntactic realizationother sets have been used by computer scientists in implementing natural language understanding systemsas a rule the more abstract roles have been proposed by linguists who are more concerned with explaining generalizations across verbs in the syntactic realization of their arguments whereas the more specific roles have more often been proposed by computer scientists who are more concerned with the details of the realization of the arguments of specific verbsthe framenet project proposes roles that are 
neither as general as the 10 abstract thematic roles nor as specific as the thousands of potential verbspecific rolesframenet roles are defined for each semantic framea frame is a schematic representation of situations involving various participants props and other conceptual roles for example the frame conversation shown in figure 1 is invoked by the semantically related verbs argue banter debate converse and gossip as well as the nouns dispute discussion and tiff and is defined as follows the roles defined for this frame and shared by all its lexical entries include protagonist1 and protagonist2 or simply protagonists for the participants in the conversation as well as medium and topicsimilarly the judgment frame mentioned above has the roles judge evaluee and reason and is invoked by verbs such as blame admire and praise and nouns such as fault and admirationwe refer to the roles for a given frame as frame elementsa number of handannotated examples from the judgment frame are included below to give a flavor of the framenet database defining semantic roles at this intermediate frame level helps avoid some of the wellknown difficulties of defining a unique small set of universal abstract thematic roles while also allowing some generalization across the roles of different verbs nouns and adjectives each of which adds semantics to the general frame or highlights a particular aspect of the frameone way of thinking about traditional abstract thematic roles such as agent and patient in the context of framenet is to conceive them as frame elements defined by abstract frames such as action and motion at the top of an inheritance hierarchy of semantic frames the examples above illustrate another difference between frame elements and thematic roles as commonly described in the literaturewhereas thematic roles tend to be arguments mainly of verbs frame elements can be arguments of any predicate and the framenet database thus includes nouns and adjectives as well as verbsthe examples above also illustrate a few of the phenomena that make it hard to identify frame elements automaticallymany of these are caused by the fact that there is not always a direct correspondence between syntax and semanticswhereas the subject of blame is often the judge the direct object of blame can be an evaluee or a reason the identity of the judge can also be expressed in a genitive pronoun or even an adjective the corpus used in this project is perhaps best described in terms of the methodology used by the framenet teamwe outline the process here for more detail see johnson et al as the first step semantic frames were defined for the general domains chosen the frame elements or semantic roles for participants in a frame were defined and a list of target words or lexical predicates whose meaning includes aspects of the frame was compiled for each frameexample sentences were chosen by searching the british national corpus for instances of each target wordseparate searches were performed for various patterns over lexical items and partofspeech sequences in the target words context producing a set of subcorpora for each target word designed to capture different argument structures and ensure that some examples of each possible syntactic usage of the target word would be included in the final databasethus the focus of the project was on completeness of examples for lexicographic needs rather than on statistically representative datasentences from each subcorpus were then annotated by hand marking boundaries of each frame 
element expressed in the sentence and assigning tags for the annotated constituents frame semantic role syntactic category and grammatical function in relation to the target word in the final phase of the process the annotated sentences for each target word were checked for consistencyin addition to the tags just mentioned the annotations include certain other information which we do not make use of in this work such as word sense tags for some target words and tags indicating metaphoric usagestests of interannotator agreement were performed for data from a small number of predicates before the final consistency checkinterannotator agreement at the sentence level including all frame element judgments and boundaries for one predicate varied from 66 to 82 depending on the predicatethe kappa statistic varied from 67 to 82because of the large number of possible categories when boundary judgments are considered kappa is nearly identical to the interannotator agreementthe system described in this article correctly identifies all frame elements in 38 of test sentencesalthough this 38 is not directly comparable to the 6682 interannotator agreements it is clear that the performance of our system still falls significantly short of human performance on the taskthe british national corpus was chosen as the basis of the framenet project despite differences between british and american usage because at 100 million words it provides the largest corpus of english with a balanced mixture of text genresthe british national corpus includes automatically assigned syntactic partofspeech tags for each word but does not include full syntactic parsesthe framenet annotators did not make use of or produce a complete syntactic parse of the annotated sentences although some syntactic information is provided by the grammatical function and phrase type tags of the annotated frame elementsthe preliminary version of the framenet corpus used for our experiments contained 67 frame types from 12 general semantic domains chosen for annotationa complete list of the semantic domains represented in our data is shown in table 1 along with representative frames and predicateswithin these frames examples of a total of 1462 distinct lexical predicates or target words were annotated 927 verbs 339 nouns and 175 adjectivesthere are a total of 49013 annotated sentences and 99232 annotated frame elements how important is the particular set of semantic roles that underlies our systemfor example could the optimal choice of semantic roles be very dependent on the application that needs to exploit their informationalthough there may well be applicationspecific constraints on semantic roles our semantic role classifiers seem in practice to be relatively independent of the exact set of semantic roles under considerationsection 91 describes an experiment in which we collapsed the framenet roles into a set of 18 abstract thematic roleswe then retrained our classifier and achieved roughly comparable results overall performance was 821 for abstract thematic roles compared to 804 for framespecific rolesalthough this does not show that the detailed set of semantic roles is irrelevant it does suggest that our statistical classification algorithm at least is relatively robust to even quite large changes in role identitiesassignment of semantic roles is an important part of language understanding and the problem of how to assign such roles has been attacked by many computational systemstraditional parsing and understanding systems including 
implementations of unificationbased grammars such as headdriven phrase structure grammar rely on handdeveloped grammars that must anticipate each way in which semantic roles may be realized syntacticallywriting such grammars is time consuming and typically such systems have limited coveragedatadriven techniques have recently been applied to templatebased semantic interpretation in limited domains by shallow systems that avoid complex feature structures and often perform only shallow syntactic analysisfor example in the context of the air traveler information system for spoken dialogue miller et al computed the probability that a constituent such as atlanta filled a semantic slot such as destination in a semantic frame for air travelin a datadriven approach to information extraction riloff builds a dictionary of patterns for filling slots in a specific domain such as terrorist attacks and riloff and schmelzenbach extend this technique to derive automatically entire case frames for words in the domainthese last systems make use of a limited amount of hand labor to accept or reject automatically generated hypothesesthey show promise for a more sophisticated approach to generalizing beyond the relatively small number of frames considered in the tasksmore recently a domainindependent system has been trained by blaheta and charniak on the function tags such as manner and temporal included in the penn treebank corpussome of these tags correspond to framenet semantic roles but the treebank tags do not include all the arguments of most predicatesin this article we aim to develop a statistical system for automatically learning to identify all semantic roles for a wide variety of predicates in unrestricted textin this section we describe the first basic version of our statistically trained system for automatically identifying frame elements in textthe system will be extended in later sectionswe first describe in detail the sentence and constituentlevel features on which our system is based and then use these features to calculate probabilities for predicting frame element labels in section 42in this section we give results for a system that labels roles using the humanannotated boundaries for the frame elements within the sentence we return to the question of automatically identifying the boundaries in section 5our system is a statistical one based on training a classifier on a labeled training set and testing on a heldout portion of the datathe system is trained by first using an automatic syntactic parser to analyze the 36995 training sentences matching annotated frame elements to parse constituents and extracting various features from the string of words and the parse treeduring testing the parser is run on the test sentences and the same features are extractedprobabilities for each possible semantic role r are then computed from the featuresthe probability computation is described in the next section here we discuss the features usedthe features used represent various aspects of the syntactic structure of the sentence as well as lexical informationthe relationship between such surface manifestations and semantic roles is the subject of linking theory in general linking theory argues that the syntactic realization of arguments of a predicate is predictable from semantics exactly how this relationship works however is the subject of much debateregardless of the underlying mechanisms used to generate syntax from semantics the relationship between the two suggests that it may be possible to learn to 
recognize semantic relationships from syntactic cues given examples with both types of information411 phrase typedifferent semantic roles tend to be realized by different syntactic categoriesfor example in communication frames the speaker is likely to appear as a noun phrase topic as a prepositional phrase or noun phrase and medium as a prepositional phrase as in speaker we talked topic about the proposal medium over the phone the phrase type feature we used indicates the syntactic category of the phrase expressing the semantic roles using the set of syntactic categories of the penn treebank project as described in marcus santorini and marcinkiewicz in our data frame elements are most commonly expressed as noun phrases and prepositional phrases the next most common categories are adverbial phrases particles and clauses we used collins statistical parser trained on examples from the penn treebank to generate parses of the same format for the sentences in our dataphrase types were derived automatically from parse trees generated by the parser as shown in figure 2given the automatically generated parse tree the constituent spanning the same set of words as each annotated frame element was found and the constituents nonterminal label was taken as the phrase typein cases in which more than one constituent matches because of a unary production in the parse tree the higher constituent was chosena sample sentence with parser output and framenet annotation parse constituents corresponding to frame elements are highlightedthe matching was performed by calculating the starting and ending word positions for each constituent in the parse tree as well as for each annotated frame element and matching each frame element with the parse constituent with the same beginning and ending pointspunctuation was ignored in this computationbecause of parsing errors or less frequently mismatches between the parse tree formalism and the framenet annotation standards for 13 of the frame elements in the training set there was no parse constituent matching an annotated frame elementthe one case of systematic mismatch between the parse tree formalism and the framenet annotation standards is the framenet convention of including both a relative pronoun and its antecedent in frame elements as in the first frame element in the following sentence mismatch caused by the treatment of relative pronouns accounts for 1 of the frame elements in the training setduring testing the largest constituent beginning at the frame elements left boundary and lying entirely within the element was used to calculate the frame elements featureswe did not use this technique on the training set as we expected that it would add noise to the data but instead discarded examples with no matching parse constituentour technique for finding a near match handles common parse errors such as a prepositional phrase being incorrectly attached to a noun phrase at the righthand edge and it guarantees that some syntactic category will be returned the partofspeech tag of the frame elements first word in the limiting case alization as subject or direct object is one of the primary facts that linking theory attempts to explainit was a motivation for the case hierarchy of fillmore which allowed such rules as if there is an underlying agent it becomes the syntactic subject similarly in his theory of macroroles van valin describes the actor as being preferred in english for the subjectfunctional grammarians consider syntactic subjects historically to have been 
grammaticalized agent markersas an example of how such a feature can be useful in the sentence he drove the car over the cliff the subject np is more likely to fill the agent role than the other two npswe will discuss various grammaticalfunction features that attempt to indicate a constituents syntactic relation to the rest of the sentence for example as a subject or object of a verbthe first such feature which we call governing category or gov has only two values s and vp corresponding to subjects and objects of verbs respectivelythis feature is restricted to apply only to nps as it was found to have little effect on other phrase typesas with phrase type the feature was read from parse trees returned by the parserwe follow links from child to parent up the parse tree from the constituent corresponding to a frame element until either an s or vp node is found and assign the value of the feature according to whether this node is an s or a vpnp nodes found under s nodes are generally grammatical subjects and np nodes under vp nodes are generally objectsin most cases the s or vp node determining the value of this feature immediately dominates the np node but attachment errors by the parser or constructions such as conjunction of two nps can cause intermediate nodes to be introducedsearching for higher ancestor nodes makes the feature robust to such caseseven given good parses this feature is not perfect in discriminating grammatical functions and in particular it confuses direct objects with adjunct nps such as temporal phrasesfor example town in the sentence he left town and yesterday in the sentence he left yesterday will both be assigned a governing category of vpdirect and indirect objects both appear directly under the vp nodefor example in the sentence he gave me a new hose me and a new hose are both assigned a governing category of vpmore sophisticated handling of such cases could improve our system413 parse tree pathlike the governingcategory feature described above the parse tree path feature is designed to capture the syntactic relation of a constituent to the rest of the sentencethe path feature however describes the syntactic relation between the target word and the constituent in question whereas the gov feature is independent of where the target word appears in the sentence that is it identifies all subjects whether they are the subject of the target word or notthe path feature is defined as the path from the target word through the parse tree to the constituent in question represented as a string of parse tree nonterminals linked by symbols indicating upward or downward movement through the tree as shown in figure 3although the path is composed as a string of symbols our system treats the string as an atomic valuethe path includes as the first element of the string the part of speech of the target word and as the last element the phrase type or syntactic category of the sentence constituent marked as a frame elementafter some experimentation we settled on a version of the path feature that collapses the various partofspeech tags for verbs including pasttense verb thirdperson singular presenttense verb other presenttense verb and past participle into a single verb tag denoted vb our path feature is dependent on the syntactic representation used which in our case is the treebank2 annotation style as our parser is trained on this later version of the treebank datafigure 4 shows the annotation for the sentence they expect him to cut costs throughout the organization which exhibits 
in this example the path from the target word ate to the frame element he can be represented as vbtvptsnp with t indicating upward movement in the parse tree and downward movementthe np corresponding to he is found as described in section 411treebank annotation of raising constructions the syntactic phenomenon known as subjecttoobject raising in which the main verbs object is interpreted as the embedded verbs subjectthe treebank2 style tends to be generous in its usage of s nodes to indicate clauses a decision intended to make possible a relatively straightforward mapping from s nodes to predicationsin this example the path from cut to the frame element him would be vbtvptvptstnp which typically indicates a verbs subject despite the accusative case of the pronoun himfor the target word of expect in the sentence of figure 4 the path to him would be vbtvptstnp rather than the typical directobject path of vbtvptnpan example of treebank2 annotation of an equi construction in which a noun phrase serves as an argument of both the main and subordinate verbs is shown in figure 5here an empty category is used in the subject position of the subordinate clause and is coindexed with the np congress in the directobject position of the main clausethe empty category however is not used in the statistical model of the parser or shown in its output and is also not used by the framenet annotation which would mark the np congress as a frame element of raise in this examplethus the value of our path feature from the target word raise to the frame element congress would be vbtvptvptstvptnp and from the target word of persuaded the path to congress would be the standard directobject path vbtvptnpother changes in annotation style from the original treebank style were specifically intended to make predicate argument structure easy to read from the parse trees and include new empty constituents coindexing relations between nodes and secondary functional tags such as subject and temporalour parser output however does not include this additional information but rather simply gives trees of phrase type categoriesthe sentence in figure 4 is one example of how the change in annotation style of treebank2 can affect this level of representation the earlier style assigned the word him an np node directly under the vp of expectthe most common values of the path feature along with interpretations are shown in table 2for the purposes of choosing a frame element label for a constituent the path feature is similar to the gov feature defined abovebecause the path captures more information than the governing category it may be more susceptible to parser errors and data sparsenessas an indication of this our path feature takes on a total of 2978 possible values in the training data when frame elements with no matching example of target word renting in a small clause parse constituent are not counted and 4086 possible values when paths are found to the bestmatching constituent in these casesthe governingcategory feature on the other hand which is defined only for nps has only two values in cases in which the path feature includes an s or vp ancestor of an np node as part of the path to the target word the gov feature is a function of the path featurethis is the case most of the time including for our prototypical subject and object pathsof the 35138 frame elements identified as nps by the parser only 4 have a path feature that does not include a vp or s ancestorone such example is shown in figure 6 where the small clause the 
remainder renting has no s node giving a path feature from renting to the remainder of vbtvptnptnpthe value of the gov feature here is vp as the algorithm finds the vp of the sentences main clause as it follows parent links up the treethe feature is spurious in this case because the main vp is not headed by or relevant to the target word rentingsystems based on the path and gov features are compared in section 43the differences between the two are relatively small for the purpose of identifying semantic roles when frame element boundaries are knownthe path feature will however be important in identifying which constituents are frame elements for a given target word as it gives us a way of navigating through the parse tree to find the frame elements in the sentence414 positionto overcome errors due to incorrect parses as well as to see how much can be done without parse trees we introduced position as a featurethe position feature simply indicates whether the constituent to be labeled occurs before or after the predicate defining the semantic framewe expected this feature to be highly correlated with grammatical function since subjects will generally appear before a verb and objects afteralthough we do not have handchecked parses against which to measure the performance of the automatic parser on our corpus the result that 13 of frame elements have no matching parse constituent gives a rough idea of the parsers accuracyalmost all of these cases in which no matching parse constituent was found are due to parser errorother parser errors include cases in which a constituent is found but with the incorrect label or internal structurethis result also considers only the individual constituent representing the frame element the parse for the rest of the sentence may be incorrect resulting in an incorrect value for the grammatical function features described in the previous two sectionscollins reports 88 labeled precision and recall on individual parse constituents on data from the penn treebank roughly consistent with our finding of at least 13 error415 voicethe distinction between active and passive verbs plays an important role in the connection between semantic role and grammatical function since direct objects of active verbs often correspond in semantic role to subjects of passive verbsfrom the parser output verbs were classified as active or passive by building a set of 10 passiveidentifying patternseach of the patterns requires both a passive auxiliary and a past participleroughly 5 of the examples were identified as passive uses416 head wordas previously noted we expected lexical dependencies to be extremely important in labeling semantic roles as indicated by their importance in related tasks such as parsinghead words of noun phrases can be used to express selectional restrictions on the semantic types of role fillersfor example in a communication frame noun phrases headed by bill brother or he are more likely to be the speaker whereas those headed by proposal story or question are more likely to be the topicsince the parser we used assigns each constituent a head word as an integral part of the parsing model we were able to read the head words of the constituents from the parser output employing the same set of rules for identifying the head child of each constituent in the parse treethe rules for assigning a head word are listed in collins prepositions are considered to be the head words of prepositional phrasesthe rules for assigning head words do not attempt to distinguish between 
cases in which the preposition expresses the semantic content of a role filler, such as path frame elements expressed by prepositional phrases headed by along, through, or in, and cases in which the preposition might be considered to be purely a case marker, as in most uses of of, where the semantic content of the role filler is expressed by the prepositions object. complementizers are considered to be heads, meaning that infinitive verb phrases are always headed by to, and subordinate clauses, such as in the sentence i am sure that he came, are headed by that. for our experiments we divided the framenet corpus as follows: onetenth of the annotated sentences for each target word were reserved as a test set, and another onetenth were set aside as a tuning set for developing our system. a few target words where fewer than 10 examples had been chosen for annotation were removed from the corpus. in our corpus the average number of sentences per target word is only 34, and the number of sentences per frame is 732, both relatively small amounts of data on which to train frame element classifiers. to label the semantic role of a constituent automatically, we wish to estimate a probability distribution indicating how likely the constituent is to fill each possible role, given the features described above and the predicate, or target word, t: p(r | h, pt, gov, position, voice, t), where r indicates semantic role, h head word, and pt phrase type. it would be possible to calculate this distribution directly from the training data by counting the number of times each role appears with a combination of features and dividing by the total number of times the combination of features appears: p(r | h, pt, gov, position, voice, t) = #(r, h, pt, gov, position, voice, t) / #(h, pt, gov, position, voice, t). in many cases, however, we will never have seen a particular combination of features in the training data, and in others we will have seen the combination only a small number of times, providing a poor estimate of the probability. the small number of training sentences for each target word and the large number of values that the head word feature in particular can take contribute to the sparsity of the data. although we expect our features to interact in various ways, we cannot train directly on the full feature set. for this reason we built our classifier by combining probabilities from distributions conditioned on a variety of subsets of the features. table 3 shows the probability distributions used in the final version of the system. [table 3: distributions calculated for semantic role identification; r indicates semantic role, pt phrase type, gov grammatical function, h head word, and t target word, or predicate.] coverage indicates the percentage of the test data for which the conditioning event had been seen in training data. accuracy is the proportion of covered test data for which the correct role is given the highest probability, and performance, which is the product of coverage and accuracy, is the overall percentage of test data for which the correct role is predicted. accuracy is somewhat similar to the familiar metric of precision, in that it is calculated over cases for which a decision is made, and performance is similar to recall, in that it is calculated over all true frame elements. unlike in a traditional precisionrecall tradeoff, however, these results have no threshold to adjust, and the task is a multiway classification rather than a binary decision. the distributions calculated were simply the empirical distributions from the training data. that is, occurrences of each role and each set of conditioning events were counted in a table, and probabilities calculated by dividing the counts for each role by the total number of observations for each conditioning event.
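as a concrete reading of the count-based estimation and of the coverage, accuracy, and performance bookkeeping just described, here is a minimal sketch; the toy feature names, roles, and data are ours, not the article's.

```python
# Hypothetical sketch: relative-frequency estimate of p(role | conditioning event)
# plus the coverage / accuracy / performance bookkeeping described in the text.
from collections import Counter, defaultdict

def train(examples, condition):
    """examples: (role, feature dict) pairs; condition: tuple of feature names."""
    table = defaultdict(Counter)
    for role, feats in examples:
        table[tuple(feats[f] for f in condition)][role] += 1
    return table

def predict(table, condition, feats):
    event = tuple(feats[f] for f in condition)
    if event not in table:
        return None                        # conditioning event unseen: no prediction
    return table[event].most_common(1)[0][0]

def evaluate(table, condition, test):
    covered = [(r, f) for r, f in test if tuple(f[x] for x in condition) in table]
    correct = sum(1 for r, f in covered if predict(table, condition, f) == r)
    coverage = len(covered) / len(test)
    accuracy = correct / len(covered) if covered else 0.0
    return coverage, accuracy, coverage * accuracy   # performance = coverage * accuracy

train_data = [("agent", {"pt": "np", "gov": "s", "t": "abduct"}),
              ("theme", {"pt": "np", "gov": "vp", "t": "abduct"}),
              ("theme", {"pt": "np", "gov": "vp", "t": "abduct"})]
table = train(train_data, ("pt", "gov", "t"))
print(evaluate(table, ("pt", "gov", "t"),
               [("theme", {"pt": "np", "gov": "vp", "t": "abduct"}),
                ("agent", {"pt": "np", "gov": "s", "t": "release"})]))   # (0.5, 1.0, 0.5)
```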
for example, the distribution p(r | pt, gov, t) was calculated as follows: p(r | pt, gov, t) = #(r, pt, gov, t) / #(pt, gov, t). some sample probabilities calculated from the training data are shown in table 4. [table 4: sample probabilities for p(r | pt, gov, t) calculated from training data for the verb abduct; the variable gov is defined only for noun phrases; the roles defined for the removing frame in the motion domain are agent, theme, cotheme, and manner.] as can be seen from table 3, there is a tradeoff between morespecific distributions, which have high accuracy but low coverage, and lessspecific distributions, which have low accuracy but high coverage. the lexical head word statistics in particular are valuable when data are available but are particularly sparse because of the large number of possible head words. to combine the strengths of the various distributions, we merged them in various ways to obtain an estimate of the full distribution p(r | h, pt, gov, position, voice, t). the first combination method is linear interpolation, which simply averages the probabilities given by each of the distributions: p(r) = Σi λi pi(r), where Σi λi = 1. the geometric mean, when expressed in the log domain, is similar: p(r) = (1/z) exp{Σi λi log pi(r)}, where z is a normalizing constant ensuring that Σr p(r) = 1. results for systems based on linear interpolation are shown in the first row of table 5. these results were obtained using equal values of λ for each distribution defined for the relevant conditioning event. as a more sophisticated method of choosing interpolation weights, the expectation maximization (EM) algorithm was used to estimate the likelihood of the observed roles being produced by each of the distributions, following the general techniques of jelinek and mercer. because a number of the distributions used may have no training data for a given set of variables, the data were divided according to the set of distributions available, and a separate set of interpolation weights was trained for each set of distributions. this technique did not outperform equal weights, even on the data used to determine the weights. although the EM algorithm is guaranteed to increase the likelihood of the training data, that likelihood does not always correspond to our scoring, which is based only on whether the correct outcome is assigned the highest probability. results of the EM interpolation on heldout test data are shown in table 6. experimentation has shown that the weights used have relatively little impact in our interpolation scheme, no doubt because the evaluation metric depends only on the ranking of the probabilities and not on their exact values. changing the interpolation weights rarely changes the probabilities of the roles enough to change their ranking. what matters most is whether a combination of variables has been seen in the training data or not. results for the geometric mean are shown in row 3 of table 5. as with linear interpolation, the exact weights were found to have little effect, and the results shown reflect equal weights. an area we have not explored is the use of the maximumentropy techniques of, for example, della pietra, della pietra, and lafferty to set weights for the loglinear model, either at the level of combining our probability distributions or at the level of calculating weights for individual values of the features.
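as a concrete illustration of the two combination rules just described, here is a minimal sketch with equal weights; the component distributions are toy dictionaries, not the article's tables.

```python
# Hypothetical sketch: combine several component estimates p_i(role) for one
# constituent by linear interpolation and by a log-domain geometric mean.
import math

def linear_interpolation(components, weights=None):
    weights = weights or [1.0 / len(components)] * len(components)   # equal weights
    roles = {r for p in components for r in p}
    return {r: sum(w * p.get(r, 0.0) for w, p in zip(weights, components)) for r in roles}

def geometric_mean(components, weights=None, floor=1e-12):
    weights = weights or [1.0 / len(components)] * len(components)
    roles = {r for p in components for r in p}
    scores = {r: math.exp(sum(w * math.log(p.get(r, floor))
                              for w, p in zip(weights, components))) for r in roles}
    z = sum(scores.values())              # normalizing constant: probabilities sum to 1
    return {r: s / z for r, s in scores.items()}

p_head = {"agent": 0.7, "theme": 0.3}                  # e.g. a head-word estimate (toy)
p_ptype = {"agent": 0.4, "theme": 0.5, "goal": 0.1}    # e.g. a phrase-type estimate (toy)
combined = linear_interpolation([p_head, p_ptype])
print(max(combined, key=combined.get))                 # role ranked highest by the average
print(geometric_mean([p_head, p_ptype]))               # the floor stands in for zero entries
```

note how the geometric mean all but eliminates a role missing from any component, whereas linear interpolation merely down-weights it; since only the ranking matters for the classifier, this difference is often what distinguishes the two rules in practice.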
in the backoff combination method, a lattice was constructed over the distributions in table 3, from morespecific conditioning events to lessspecific, as shown in figure 7. [figure 7: lattice organization of the distributions from table 3, with morespecific distributions toward the top.] the lattice is used to select a subset of the available distributions to combine. the lessspecific distributions were used only when no data were present for any morespecific distribution. thus the distributions selected are arranged in a cut across the lattice representing the mostspecific distributions for which data are available. the selected probabilities were combined with both linear interpolation and a geometric mean, with results shown in table 5. the final row of the table represents the baseline of always selecting the most common role of the target word for all its constituents, that is, using only p(r | t). although this lattice is reminiscent of techniques of backing off to less specific distributions commonly used in ngram language modeling, it differs in that we use the lattice only to select distributions for which the conditioning event has been seen in the training data. discounting and deleted interpolation methods in language modeling typically are used to assign small nonzero probability to a predicted variable unseen in the training data, even when a specific conditioning event has been seen. in our case we are perfectly willing to assign zero probability to a specific role: we are interested only in finding the role with the highest probability, and a role given a small nonzero probability by smoothing techniques will still not be chosen as the classifiers output. the lattice presented in figure 7 represents just one way of choosing subsets of features for our system. designing a feature lattice can be thought of as choosing a set of feature subsets: once the probability distributions of the lattice have been chosen, the graph structure of the lattice is determined by the subsumption relations among the sets of conditioning variables. given a set of n conditioning variables, there are 2^n possible subsets and 2^(2^n) possible sets of subsets, giving us a doubly exponential number of possible lattices. the particular lattice of figure 7 was chosen to represent some expected interaction between features. for example, we expect position and voice to interact, and they are always used together. we expect the head word h and the phrase type pt to be relatively independent predictors of the semantic role and therefore include them separately as roots of the backoff structure. although we will not explore all the possibilities for our lattice, some of the feature interactions are examined more closely in section 4.3. the final system performed at 80.4% accuracy, which can be compared to the 40.9% achieved by always choosing the most probable role for each target word, essentially chance performance on this task. results for this system on test data held out during development of the system are shown in table 6. surprisingly, the EM-based interpolation performed better than the latticebased system on the heldout test set, but not on the data used to set the weights in the EM-based system. we return to an analysis of which roles are hardest to classify in section 9.1. three of our features, position, gov, and path, attempt to capture the syntactic relation between the target word and the constituent to be labeled, and in particular to differentiate the subjects from objects of verbs.
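before comparing the three indicators, a minimal sketch of how each could be read off a toy constituent tree may help; the node class, the parent-linking helper, and the "^"/"!" separators (standing in for the upward and downward arrows of the path notation described earlier) are our own, not the article's representation.

```python
# Hypothetical sketch: the three grammatical-relation indicators (position, gov, path)
# computed from a toy constituent tree; names and path separators are ours.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(eq=False)                 # identity comparison; parent links create cycles
class Node:
    label: str                       # e.g. "S", "VP", "NP", "VB"
    children: List["Node"] = field(default_factory=list)
    parent: Optional["Node"] = None
    start: int = 0                   # word span, used only by the position feature
    end: int = 0

def link(parent):
    for child in parent.children:
        child.parent = parent
        link(child)

def position(constituent, target):
    """Simply whether the constituent precedes or follows the target word."""
    return "before" if constituent.end <= target.start else "after"

def gov(np_node):
    """Governing category: nearest S or VP ancestor of an NP (subject vs. object)."""
    node = np_node.parent
    while node is not None and node.label not in ("S", "VP"):
        node = node.parent
    return node.label if node else None

def path(target, constituent):
    """Nonterminal path from target word to constituent: '^' upward, '!' downward."""
    def ancestors(n):
        out = []
        while n is not None:
            out.append(n)
            n = n.parent
        return out
    up, down = ancestors(target), ancestors(constituent)
    common = next(a for a in up if a in down)          # lowest common ancestor
    up_labels = [n.label for n in up[: up.index(common) + 1]]
    down_labels = [n.label for n in down[: down.index(common)]][::-1]
    return "^".join(up_labels) + "!" + "!".join(down_labels)

# "he ate it": S -> NP(he) VP -> VB(ate) NP(it)
subj = Node("NP", start=0, end=1)
verb = Node("VB", start=1, end=2)
obj = Node("NP", start=2, end=3)
vp = Node("VP", children=[verb, obj], start=1, end=3)
root = Node("S", children=[subj, vp], start=0, end=3)
link(root)
print(position(subj, verb), gov(subj), path(verb, subj))   # before S VB^VP^S!NP
```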
to compare these three features directly, experiments were performed using each feature alone in an otherwise identical system. results are shown in table 7. for the first set of experiments, corresponding to the first column of table 7, no voice information was used, with the result that the remaining distributions formed the lattice of figure 8(a). [figure 8: lattice structures for comparing grammaticalfunction features; the grammaticalfunction node in each lattice represents one of the features position, gov, and path.] adding voice information back into the system independently of the grammaticalfunction feature results in the lattice of figure 8(b), corresponding to the second column of table 7. choosing distributions such that the grammatical function and voice features are always used together results in figure 8(c), corresponding to the third column of table 7. in each case, as in previous results, the grammatical function feature was used only when the candidate constituent was an np. the last row of table 7 shows results using no grammaticalfunction feature: the distributions making use of gf are removed from the lattices of figure 8. as a guideline for interpreting these results, with 8,167 observations, the threshold for statistical significance with p < .05 is a 1.0% absolute difference in performance. it is interesting to note that looking at a constituents position relative to the target word performed as well as either of our features that read grammatical function off the parse tree, both with and without passive information. the gov and path features seem roughly equivalent in performance. using head word, phrase type, and target word without either position or grammatical function yielded only 76.3% accuracy, indicating that although the two features accomplish a similar goal, it is important to include some measure of the constituents relationship to the target word, whether relative position or either of the syntactic features. use of the activepassive voice feature seems to be beneficial only when the feature is tied to grammatical function: the second column in table 7 shows no improvement over the first, while the righthand column, where grammatical function and voice are tied, shows gains of at least 0.5% in all cases. as before, our three indicators of grammatical function seem roughly equivalent, with the best result in this case being the gov feature. the lattice of figure 8(c) performs as well as our system of figure 7, indicating that including both position and either of the syntactic relations is redundant. as an experiment to see how much can be accomplished with as simple a system as possible, we constructed the minimal lattice of figure 9, which includes just two distributions, along with a prior for the target word to be used as a last resort when no data are available. this structure assumes that head word and grammatical function are independent. it further makes no use of the voice feature. we chose the path feature as the representation of grammatical function in this case. this system classified 76.3% of frame elements correctly, indicating that one can obtain roughly ninetenths the performance of the full system with a simple approach.
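as a rough reading of that minimal configuration, here is a sketch of our own: the head-word and path estimates are treated as two independent evidence sources and simply averaged when either has data, with the per-target-word prior used otherwise. the averaging rule, the table layout, and the path string are our simplifications, not the published model.

```python
# Hypothetical sketch of the minimal configuration: p(r | h, t) and p(r | path, t)
# treated as independent evidence, with p(r | t) as a last resort when neither has data.
def minimal_classify(p_h_t, p_path_t, p_t, h, path, t):
    available = [tab[key] for tab, key in ((p_h_t, (h, t)), (p_path_t, (path, t)))
                 if key in tab]
    if not available:
        return max(p_t[t], key=p_t[t].get)          # back off to the target-word prior
    roles = {r for d in available for r in d}
    dist = {r: sum(d.get(r, 0.0) for d in available) / len(available) for r in roles}
    return max(dist, key=dist.get)

p_t = {"blame": {"judge": 0.5, "evaluee": 0.4, "reason": 0.1}}
p_h_t = {("she", "blame"): {"judge": 0.9, "evaluee": 0.1}}
p_path_t = {}                                       # no path statistics seen for this case
print(minimal_classify(p_h_t, p_path_t, p_t, "she", "VB^VP^S!NP", "blame"))   # judge
```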
in this section we examine the systems performance on the task of locating the frame elements in a sentence. although our probability model considers the question of finding the boundaries of frame elements separately from the question of finding the correct label for a particular frame element, similar features are used to calculate both probabilities. in the experiments below, the system is no longer given frame element boundaries but is still given as inputs the humanannotated target word and the frame to which it belongs. we do not address the task of identifying which frames come into play in a sentence but envision that existing word sense disambiguation techniques could be applied to the task. as before, features are extracted from the sentence and its parse and are used to calculate probability tables, with the predicted variable in this case being fe, a binary indicator of whether a given constituent in the parse tree is or is not a frame element. the features used were the path feature of section 4.1.3, the identity of the target word, and the identity of the constituents head word. the probability distributions calculated from the training data were p(fe | path), p(fe | path, t), and p(fe | h, t), where fe indicates an event where the parse constituent in question is a frame element, path the path through the parse tree from the target word to the parse constituent, t the identity of the target word, and h the head word of the parse constituent. some sample values from these distributions are shown in table 8. for example, the path vbtvptnp, which corresponds to the direct object of a verbal target word, had a high probability of being a frame element. the table also illustrates cases of sparse data for various feature combinations. by varying the probability threshold at which a decision is made, one can plot a precisionrecall curve, as shown in figure 10. [figure 10: plot of precisionrecall curve for various methods of identifying frame elements; recall is calculated over only frame elements with matching parse constituents.] p(fe | path, t) performs relatively poorly because of fragmentation of the training data; although the lexical statistic p(fe | h, t) alone is not useful as a classifier, using it in linear interpolation with the path statistics improves results. the curve labeled interpolation in figure 10 reflects a linear interpolation of the form p(fe | path, h, t) = λ1 p(fe | path) + λ2 p(fe | path, t) + λ3 p(fe | h, t). note that this method can identify only those frame elements that have a corresponding constituent in the automatically generated parse tree. for this reason, it is interesting to calculate how many true frame elements overlap with the results of the system, relaxing the criterion that the boundaries must match exactly. results for partial matching are shown in table 9:

  [table 9: matching of identified constituents to true frame element boundaries — type of overlap, %, number]
  exactly matching boundaries                                  66%   5421
  identified constituent entirely within true frame element     8%    663
  true frame element entirely within identified constituent     7%    599
  both partially within the other                                0%     26
  no overlap with any true frame element                        13%    972

three types of overlap are possible: the identified constituent entirely within the true frame element, the true frame element entirely within the identified constituent, and each sequence partially contained by the other. an example of the first case is shown in figure 11, where the true message frame element is mandarin by a head, but because of an error in the parser output, no constituent exactly matches the frame elements boundaries. in this case the system identifies two frame elements, indicated by shading, which together span the true frame element. [figure 11: an example of overlap between identified frame elements and the true boundaries caused by parser error; in this case two frame elements identified by the classifier are entirely within the human annotation, contributing two instances to row 2 of table 9.] when the automatically identified constituents were fed through the rolelabeling system described above, 79.6% of the constituents that had been correctly identified in the first stage were assigned the correct role in the second, roughly equivalent to the performance when roles were assigned to constituents identified by hand. a more sophisticated integrated system for identifying and labeling frame elements is described in section 7.1.
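to make the boundary-identification step concrete, here is a minimal sketch: each candidate parse constituent receives an interpolated estimate of p(fe | path, h, t) of the form given above, and sweeping a threshold trades precision against recall as in the curve just described. the tables, weights, spans, and path-string encoding are toy values of our own.

```python
# Hypothetical sketch: score each candidate constituent with an interpolated
# p(fe | path, h, t) and sweep a threshold to trade precision against recall.
def p_fe(c, p_path, p_path_t, p_h_t, lambdas=(0.3, 0.4, 0.3)):
    l1, l2, l3 = lambdas
    return (l1 * p_path.get(c["path"], 0.0)
            + l2 * p_path_t.get((c["path"], c["t"]), 0.0)
            + l3 * p_h_t.get((c["h"], c["t"]), 0.0))

def precision_recall(candidates, gold_spans, threshold, **tables):
    predicted = {c["span"] for c in candidates if p_fe(c, **tables) >= threshold}
    tp = len(predicted & gold_spans)
    precision = tp / len(predicted) if predicted else 1.0
    recall = tp / len(gold_spans) if gold_spans else 1.0
    return precision, recall

tables = {"p_path": {"VB^VP!NP": 0.73},
          "p_path_t": {("VB^VP!NP", "blame"): 0.88},
          "p_h_t": {("decision", "blame"): 0.60}}
candidates = [{"span": (3, 5), "path": "VB^VP!NP", "h": "decision", "t": "blame"},
              {"span": (0, 1), "path": "VB^S!NP", "h": "yesterday", "t": "blame"}]
for threshold in (0.2, 0.5, 0.8):
    print(threshold, precision_recall(candidates, {(3, 5)}, threshold, **tables))
```

lowering the threshold admits more constituents and raises recall at the expense of precision, which is exactly the trade-off the plotted curve summarizes; note also that recall here is defined only over gold frame elements that have a matching parse constituent, as in the article.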
as can be seen from table 3, information about the head word of a constituent is valuable in predicting the constituents role. of all the distributions presented, the head wordbased statistic predicts the correct role most often when training data for a particular head word have been seen. because of the large vocabulary of possible head words, however, it also has the smallest coverage, meaning that it is likely that for a given case in the test data, no frame element with the same head word will have been seen in the set of training sentences for the target word in question. to capitalize on the information provided by the head word, we wish to find a way to generalize from head words seen in the training data to other head words. in this section we compare three different approaches to the task of generalizing over head words: automatic clustering of a large vocabulary of head words to identify words with similar semantics; use of a handbuilt ontological resource, wordnet, to organize head words in a semantic hierarchy; and bootstrapping to make use of unlabeled data in training the system. we will focus on frame elements filled by noun phrases, which constitute roughly half the total. to find groups of head words that are likely to fill the same semantic roles, an automatic clustering of nouns was performed using word cooccurrence data from a large corpus. this technique is based on the expectation that words with similar semantics will tend to cooccur with the same other sets of words. for example, nouns describing foods will tend to occur as direct objects of verbs such as eat, devour, and savor. the clustering algorithm attempts to find such patterns of cooccurrence from the counts of grammatical relations between pairs of specific words in the corpus, without the use of any external knowledge or semantic representation. we extracted verbdirect object relations from an automatically parsed version of the british national corpus using the parser of carroll and rooth. clustering was performed using the probabilistic model of cooccurrence described in detail by hofmann and puzicha, in which the verb and the noun are generated from a hidden cluster variable c through the component distributions p(v | c) and p(n | c). deterministic annealing was used to prevent overfitting of the training data. we are interested only in the clusters of nouns given by the distribution p(n | c); the verbs and the distribution p(v | c) are thrown away once training is completed. other grammatical relations besides direct object could be used, as could a set of relations. we used the direct object because it is particularly likely to exhibit semantically significant selectional restrictions. a total of 2,610,946 verbobject pairs were used as training data for the clustering, with a further 290,105 pairs used as a crossvalidation set to control the parameters of the clustering algorithm. direct objects were identified as noun phrases directly under a verb phrase node, not a perfect technique, since it also finds nominal adjuncts such as i start today. forms of the verb to be were excluded from the data, as its cooccurrence patterns are not semantically informative. the number of values possible for the latent cluster variable was set to 256. the soft clustering of nouns thus generated is used as follows: for each example in the frame elementannotated training data, probabilities for values of the hidden cluster variable were calculated using bayes rule, p(c | h) = p(h | c) p(c) / Σc' p(h | c') p(c'). the clustering was applied only to noun phrase constituents; the distribution p(n | c) from the clustering is used as a distribution p(h | c) over noun head words. using the cluster probabilities, a new estimate of p(r | c, pt, t) is calculated for cases where pt, the phrase type or syntactic category of the constituent, is np: p(r | c, pt, t) = Σ{j : rj = r, ptj = pt, tj = t} p(c | hj) / Σ{j : ptj = pt, tj = t} p(c | hj), where j is an index ranging over the frame elements in the training set and their associated features pt, t, h and their semantic roles r. during testing, a smoothed estimate of the head wordbased role probability is calculated by marginalizing over cluster values: p(r | h, pt, t) = Σc p(r | c, pt, t) p(c | h). as with the other methods of generalization described in
this section automatic clustering was applied only to noun phrases which represent 50 of the constituents in the test datawe would not expect head word to be as valuable for other phrase typesthe second most common category is prepositional phrasesthe head of a prepositional phrase is considered to be the preposition according to the rules we use and because the set of prepositions is small coverage is not as great a problemfurthermore the preposition is often a direct indicator of the semantic rolephrase types other than np and pp make up only a small proportion of the datatable 10 shows results for the use of automatic clustering on constituents identified by the parser as noun phrasesas can be seen in the table the vocabulary used for clustering includes almost all of the test data and the decrease in accuracy from direct lexical statistics to clustered statistics is relatively small when combined with the full system described above clustered statistics increase performance on np constituents from 834 to 850 over the entire test set this translates into an improvement from 804 to 812the automatic clustering described above can be seen as an imperfect method of deriving semantic classes from the vocabulary and we might expect a handdeveloped set of classes to do betterwe tested this hypothesis using wordnet a freely available semantic hierarchythe basic technique when presented with a head word for which no training examples had been seen was to ascend the type hierarchy until reaching a level for which training data are availableto do this counts of training data were percolated up the semantic hierarchy in a technique similar to that of for example mccarthy for each training example the count was incremented in a table indexed by the semantic role r wordnet sense s phrase type pt and target word t for each wordnet sense s above the head word h in the hypernym hierarchyin fact the wordnet hierarchy is not a tree but rather includes multiple inheritancefor example person has as hypernyms both life form and causal agentin such cases we simply took the first hypernym listed effectively converting the structure into a treea further complication is that several wordnet senses are possible for a given head wordwe simply used the first sense listed for each word a word sense disambiguation module capable of distinguishing wordnet senses might improve our resultsas with the clustering experiments reported above the wordnet hierarchy was used only for noun phrasesthe wordnet hierarchy does not include pronouns to increase coverage the personal pronouns i me you he she him her we and us were added as hyponyms of personpronouns that refer to inanimate or both animate and inanimate objects were not includedin addition the celex english lexical database was used to convert plural nouns to their singular formsas shown in table 11 accuracy for the wordnet technique is roughly the same as that in the automatic clustering results in table 10 843 on nps as opposed to 850 with automatic clusteringthis indicates that the error introduced by the unsupervised clustering is roughly equivalent to the error caused by our arbitrary choice of the first wordnet sense for each word and the first hypernym for each wordnet sensecoverage for the wordnet technique is lower however largely because of the absence of proper nouns from wordnet as well as the absence of nonanimate pronouns a dictionary of proper nouns would likely help improve coverage and a module for anaphora resolution might help cases with pronouns 
with or without the use of wordnet the conversion of plural forms to singular base forms was an important part of the success of the wordnet system increasing coverage from 710 to 808 of the remaining 192 of all noun phrases not covered by the combination of lexical and wordnet sense statistics 22 consisted of head words defined in wordnet but for which no training data were available for any hypernym and 78 consisted of head words not defined in wordnet a third way of attempting to improve coverage of the lexical statistics is to bootstrap or label unannotated data with the automatic system described in sections 4 and 5 and use the result as further training data this can be considered a variant of the em algorithm although we use the single most likely hypothesis for the unannotated data rather than calculating the expectation over all hypotheses only one iteration of training on the unannotated data was performed the unannotated data used consisted of 156590 sentences containing the target words of our corpus increasing the total amount of data available to roughly six times the 36995 annotated training sentences table 12 shows results on noun phrases for the bootstrapping method the accuracy of a system trained only on data from the automatic labeling is 810 reasonably close to the 870 for the system trained only on annotated data combining the annotated and automatically labeled data increases coverage from 416 to 547 and performance to 445 because the automatically labeled data are not as accurate as the annotated data we can do slightly better by using the automatic data only in cases where no training data are available backing off to the distribution p auto from p train the fourth row of table 12 shows results with p auto incorporated into the backoff lattice of all the features of figure 7 which actually resulted in a slight decrease in performance from the system without the bootstrapped data shown in the third row this is presumably because although the system trained on automatically labeled data performed with reasonable accuracy many of the cases it classifies correctly overlap with the training data in fact our backingoff estimate of p classifies correctly only 66 of the additional cases that it covers over p train the three methods of generalizing lexical statistics each had roughly equivalent accuracy on cases for which they were able to derive an estimate of the role probabilities for unseen head words the differences between the three were primarily due to how much they could improve the coverage of the estimator that is how many new noun heads they were able to handle the automaticclustering method performed by far the best on this metric only 21 of test cases were unseen in the data used for the automatic clustering this indicates how much can be achieved with unsupervised methods given very large training corpora the bootstrapping technique described here although it has a similar unsupervised flavor made use of much less data than the corpus used for noun clustering unlike probabilistic clustering the bootstrapping technique can make use of only those sentences containing the target words in question the wordnet experiment on the other hand indicates both the usefulness of handbuilt resources when they apply and the difficulty of attaining broad coverage with such resources combining the three systems described would indicate whether their gains are complementary or overlapping one of the primary difficulties in labeling semantic roles is that one predicate may be used with different
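The backoff to bootstrapped statistics described above can be pictured as follows; the distribution names p_train and p_auto and the toy entries are assumptions for illustration.

```python
def role_distribution(head, pt, target, p_train, p_auto):
    """Back off from the annotated estimate to the bootstrapped one.

    p_train and p_auto map (head, pt, target) -> {role: probability}.
    """
    key = (head, pt, target)
    if key in p_train:          # annotated data win whenever they exist
        return p_train[key]
    return p_auto.get(key, {})  # otherwise fall back to auto-labeled data

# toy example: 'sandwich' unseen in annotated data but seen in bootstrapped data
p_train = {("door", "NP", "open"): {"Theme": 0.9, "Agent": 0.1}}
p_auto = {("sandwich", "NP", "eat"): {"Theme": 0.8, "Agent": 0.2}}
print(role_distribution("sandwich", "NP", "eat", p_train, p_auto))
```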
argument structures for example in the sentences he opened the door and the door opened the verb open assigns different semantic roles to its syntactic subjectin this section we compare two strategies for handling this type of alternation in our system a sentencelevel feature for frame element groups and a subcategorization feature for the syntactic uses of verbsthen a simple system using the predicates argument structure or syntactic signature as the primary feature will be contrasted with previous systems based on local independent featuresthe system described in previous sections for classifying frame elements makes an important simplifying assumption it classifies each frame element independent of the decisions made for the other frame elements in the sentencein this section we remove this assumption and present a system that can make use of the information that for example a given target word requires that one role always be present or that having two instances of the same role is extremely unlikelyto capture this information we introduce the notion of a frame element group which is the set of frame element roles present in a particular sentence frame element groups are unordered examples are shown in table 13sample probabilities from the training data for the frame element groups of the target word blame are shown in table 14the framenet corpus recognizes three types of nullinstantiated frame elements which are implied but do not appear in the sentencean example of null instantiation is the sentence have you eaten where food is understoodwe did not attempt to identify such null elements and any nullinstantiated roles are not included in the sentences fegthis increases the variability of observed fegs as a predicate may require a certain role but allow it to be null instantiatedour system for choosing the most likely overall assignment of roles for all the frame elements of a sentence uses an approximation that we derive beginning with the true probability of the optimal role assignment r where p represents the probability of an overall assignment of roles ri to each of the n constituents of a sentence given the target word t and the various features fi of each of the constituentsin the first step we apply bayes rule to this and in the second we make the assumption that the features of the various constituents of a sentence are independent given the target word and each constituents role and discard the term p which is constant with respect to r we estimate the prior over frame element assignments as the probability of the frame element groups represented with the set operator and finally discard the feature prior p as being constant over the argmax expression this leaves us with an expression in terms of the prior for frame element groups of a particular target word p the local probability of a frame element given a constituents features p on which our previous system was based and the individual priors for the frame elements chosen pthis formulation can be used to assign roles either when the frame element boundaries are known or when they are not as we will discuss later in this sectioncalculating empirical feg priors from the training data is relatively straightforward but the sparseness of the data presents a problemin fact 15 of the test sentences had an feg not seen in the training data for the target word in questionusing the empirical value for the feg prior these sentences could never be correctly classifiedfor this reason we introduce a smoothed estimate of the feg prior 
consisting of a linear interpolation of the empirical feg prior and the product for each possible frame element of the probability of being present or not present in a sentence given the target word the value of the interpolation weight was empirically set to maximize performance on the development set a value of 06 yielded performance of 816 a significant improvement over the 804 of the baseline system results were relatively insensitive to the exact value of this weight up to this point we have considered separately the problems of labeling roles given that we know where the boundaries of the frame elements lie and finding the constituents to label in the sentence we now turn to combining the two systems described above into a complete role labeling system we use equation repeated below to estimate the probability that a constituent is a frame element where p is the path through the parse tree from the target word to the constituent t is the target word and h is the constituents head word the first two rows of table 15 show the results when constituents are determined to be frame elements by setting the threshold on the probability p to 05 and then running the labeling system of section 4 on the resulting set of constituents the first two columns of results show precision and recall for the task of identifying frame element boundaries correctly the second pair of columns gives precision and recall for the combined task of boundary identification and role labeling to be counted as correct the frame element must both have the correct boundary and be labeled with the correct role contrary to our results using humanannotated boundaries incorporating feg priors into the system based on automatically identified boundaries had a negative effect on labeled precision and recall no doubt this is due to introducing a dependency on other frame element decisions that may be incorrect the use of feg priors causes errors in boundary identification to be compounded one way around this problem is to integrate boundary identification with role labeling allowing the feg priors and the rolelabeling decisions to affect which constituents are frame elements this was accomplished by extending the formulation where fei is a binary variable indicating that a constituent is a frame element and p is calculated as above when fei is true role probabilities are calculated as before when fei is false ri assumes an empty role with probability one and is not included in the feg represented by the set of assigned roles one caveat in using this integrated approach is its exponential complexity each combination of role assignments to constituents is considered and the number of combinations is exponential in the number of constituents although this did not pose a problem when only the annotated frame elements were under consideration now we must include every parse constituent with a nonzero probability for p ( figure 12 two subcategorizations for the target word open the relevant production in the parse tree is highlighted on the left the value of the feature is vp vb np on the right it is vp vb ) to make the computation tractable we implement a pruning scheme hypotheses are extended by choosing assignments for one constituent at a time and only the top m hypotheses are retained for extension by assignments to the next constituent here we set m 10 after experimentation showed that increasing m yielded no significant improvement results for the integrated approach are shown in the last row of table 15 allowing role assignments to influence boundary identification improves results both on the unlabeled
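The pruning scheme described above can be sketched as a simple beam search; this is an illustrative reconstruction rather than the authors' code, and it assumes callables for the frame element probability, the per-constituent role distribution, and the frame element group prior, with the group prior applied once hypotheses are complete.

```python
def integrated_label(constituents, p_fe, p_role, feg_prior, target, beam=10):
    """Beam search over joint (is-frame-element, role) assignments.

    constituents: list of feature dicts, one per parse constituent
    p_fe(feats) -> probability that the constituent is a frame element
    p_role(feats) -> {role: probability} for the constituent
    feg_prior(frozenset_of_roles, target) -> prior for that set of roles
    """
    hypotheses = [((), 1.0)]  # (tuple of chosen roles or None, partial score)
    for feats in constituents:
        extended = []
        fe_p = p_fe(feats)
        for roles, score in hypotheses:
            # option 1: not a frame element, the role stays empty
            extended.append((roles + (None,), score * (1.0 - fe_p)))
            # option 2: frame element, one branch per candidate role
            for role, rp in p_role(feats).items():
                extended.append((roles + (role,), score * fe_p * rp))
        # keep only the top `beam` partial hypotheses (beam = 10 as in the text)
        hypotheses = sorted(extended, key=lambda h: -h[1])[:beam]

    def full_score(roles, score):
        # rescore complete hypotheses with the frame element group prior
        feg = frozenset(r for r in roles if r is not None)
        return score * feg_prior(feg, target)

    return max(hypotheses, key=lambda h: full_score(*h))
```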
boundary identification task and on the combined identification and labeling taskthe integrated approach puts us in a different portion of the precisionrecall curve from the results in the first two rows as it returns a higher number of frame elements a more direct comparison can be made by lowering the probability threshold for frame element identification from 05 to 035 to force the nonintegrated system to return the same number of frame elements as the integrated systemthis yields a frame element identification precision of 713 and recall of 676 and a labeled precision of 608 and recall of 576 which is dominated by the result for the integrated systemthe integrated system does not have a probability threshold to set nonetheless it comes closer to identifying the correct number of frame elements than does the independent boundary identifier when the theoretically optimal threshold of 05 is used with the latterrecall that use of the feg prior was motivated by the tendency of verbs to assign differing roles to the same syntactic positionfor example the verb open assigns different roles to the syntactic subject in he opened the door and the door openedin this section we consider a different feature motivated by these problems the syntactic subcategorization of the verbfor example the verb open seems to be more likely to assign the role patient to its subject in an intransitive context and agent to its subject in a transitive contextour use of a subcategorization feature was intended to differentiate between transitive and intransitive uses of a verbthe feature used was the identity of the phrase structure rule expanding the target words parent node in the parse tree as shown in figure 12for example for he closed the door with close as the target word the subcategorization feature would be vp vb np the subcategorization feature was used only when the target word was a verbthe various partofspeech tags for verb forms were collapsed into a single tag vbit is important to note that we are not able to distinguish complements from adjuncts and our subcategorization feature could be sabotaged by cases such as the door closed yesterdayin the penn treebank style yesterday is considered an np with tree structure equivalent to that of a direct objectour subcategorization feature is fairly specific for example the addition of an advp to a verb phrase will result in a different valuewe tested variations of the feature that counted the number of nps in a vp or the total number of children of the vp with no significant change in resultsthe subcategorization feature was used in conjunction with the path feature which represents the sequence of nonterminals along the path through the parse tree from the target word to the constituent representing a frame elementmaking use of the new subcategorization feature by adding the distribution p to the lattice of distributions in the baseline system resulted in a slight improvement to 808 performance from 804as with the gov feature in the baseline system it was found beneficial to use the subcat feature only for np constituentscombining the feg priors and subcategorization feature into a single system resulted in performance of 816 no improvement over using feg priors without subcategorizationwe suspect that the two seemingly different approaches in fact provide similar informationfor example in our hypothetical example of the sentence he opened the door vs the sentence the door opened the verb open would have high priors for the fegs agent theme and theme but a low 
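A small sketch of extracting the subcategorization feature described above from a treebank-style parse using nltk's Tree class; the example sentence and the exact set of verb tags collapsed to vb are illustrative assumptions.

```python
from nltk import Tree

VERB_TAGS = {"VB", "VBD", "VBZ", "VBP", "VBN", "VBG"}

def subcat_feature(tree, target_word):
    """Return the phrase structure rule expanding the target verb's parent,
    e.g. 'VP -> VB NP', with all verb part-of-speech tags collapsed to VB."""
    for parent in tree.subtrees():
        for child in parent:
            if isinstance(child, Tree) and child.label() in VERB_TAGS \
                    and child.leaves() == [target_word]:
                labels = ["VB" if isinstance(c, Tree) and c.label() in VERB_TAGS
                          else c.label() for c in parent if isinstance(c, Tree)]
                return parent.label() + " -> " + " ".join(labels)
    return None

t = Tree.fromstring("(S (NP (PRP He)) (VP (VBD closed) (NP (DT the) (NN door))))")
print(subcat_feature(t, "closed"))   # VP -> VB NP
```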
prior for agentin sentences with only one candidate frame element the use of the feg prior will cause it to be labeled theme even when the feature probabilities prefer labeling a subject as agentthus the feg prior by representing the set of arguments the predicate is likely to take essentially already performs the function of the subcategorization featurethe feg prior allows us to introduce a dependency between the classifications of the sentences various constituents with a single parameterthus it can handle the alternation of our example without for example introducing the role chosen for one constituent as an additional feature in the probability distribution for the next constituents roleit appears that because introducing additional features can further fragment our already sparse data it is preferable to have a single parameter for the feg prioran interesting result reinforcing this conclusion is that some of the argumentstructure features that aided the system when individual frame elements were considered independently are unnecessary when using feg priorsremoving the features passive and position from the system and using a smaller lattice of only the distributions not employing these features yields an improved performance of 828 on the rolelabeling task using handannotated boundarieswe believe that because these features pertain to syntactic alternations in how arguments are realized they overlap with the function of the feg prioradding unnecessary features to the system can reduce performance by fragmenting the training datain the experiments reported in previous sections we have used the parse tree returned by a statistical parser as input to the rolelabeling systemin this section we explore the interaction between semantic roles and syntactic parsing by integrating the parser with the semanticrole probability modelthis allows the semanticrole assignment to affect the syntactic attachment decisions made by the parser with the hope of improving the accuracy of the complete systemalthough most statistical parsing work measures performance in terms of syntactic trees without semantic information an assignment of role fillers has been incorporated into a statistical parsing model by miller et al for the domainspecific templates of the message understanding conference taska key finding of miller et als work was that a system developed by annotating role fillers in text and training a statistical system performed at the same level as one based on writing a large system of rules which requires much more highly skilled labor to designwe use as the baseline of all our parsing experiments the model described in collins the algorithm is a form of chart parsing which uses dynamic programming to search through the exponential number of possible parses by considering subtrees for each subsequence of the sentence independentlyto apply chart parsing to a probabilistic grammar independence relations must be assumed to hold between the probabilities of a parse tree and the internal structure of its subtreesin the case of stochastic contextfree grammar the probability of a tree is independent of the internal structure of its subtrees given the topmost nonterminal of the subtreethe chartparsing algorithm can simply find the highestprobability parse for each nonterminal for each substring of the input sentenceno lowerprobability subtrees will ever be used in a complete parse and they can be thrown awayrecent lexicalized stochastic parsers such as collins charniak and others add additional features 
to each constituent the most important being the head word of the parse constituentthe statistical system for assigning semantic roles described in the previous sections does not fit easily into the chartparsing framework as it relies on longdistance dependencies between the target word and its frame elementsin particular the path feature which is used to navigate through the sentence from the target word to its likely frame elements may be an arbitrarily long sequence of syntactic constituentsa path feature looking for frame elements for a target word in another part of the sentence may examine the internal structure of a constituent violating the independence assumptions of the chart parserthe use of priors over fegs further complicates matters by introducing sentencelevel features dependent on the entire parsefor these reasons we use the syntactic parsing model without frame element probabilities to generate a number of candidate parses compute the best frame element assignment for each and then choose the analysis with the highest overall probabilitythe frame element assignments are computed as in section 71 with frame element probabilities being applied to every constituent in the parseto return a large number of candidate parses the parser was modified to include constituents in the chart even when they were equivalent according to the parsing model to a higherprobability constituentrather than choosing a fixed n and keeping the n best constituents for each entry in the chart we chose a probability threshold and kept all constituents within a margin of the highestprobability constituentthus the mechanism is similar to the beam search used to prune nonequivalent edges but a lower threshold was used for equivalent edges using these pruning parameters an average of 149 parses per sentence were obtainedafter rescoring with frame element probabilities 18 of the sentences were assigned a parse different from the original best parsenevertheless the impact on identification of frame elements was small results are shown in table 16the results show a slight but not statistically significant increase in recall of frame elementsone possible reason that the improvement is not greater is the relatively small number of parses per sentence available for rescoringunfortunately the parsing algorithm used to generate nbest parses is inefficient and generating large numbers of parses seems to be computationally intractablein theory the complexity of nbest variations of the viterbi chartparsing algorithm is quadratic in n one can simply expand the dynamic programming chart to have n slots for the best solutions to each subproblem rather than oneas our grammar forms new constituents from pairs of smaller constituents for each pair of constituents considered in a singlebest parser up to n2 pairs would be present in the nbest variantthe beam search used by modern parsers however makes the analysis more complexlexicalization of parse constituents dramatically increases the number of categories that must be stored in the chart and efficient parsing requires that constituents below a particular probability threshold be dropped from further considerationin practice returning a larger number of parses with our algorithm seems to require increasing the pruning beam size to a degree that makes run times prohibitivein addition to the robustness of even relatively simple parsing models one explanation for the modest improvement may be the fact that even our integrated system includes semantic information for only 
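The rescoring of candidate parses described above reduces to picking the parse that maximizes the combined score; a minimal sketch, assuming log-probability callables for the parser model and for the best frame element assignment of a parse.

```python
def rescore_parses(candidate_parses, parser_logprob, best_fe_logprob):
    """Choose the parse maximizing parser probability times the probability
    of its best frame element assignment (working in log space)."""
    return max(candidate_parses,
               key=lambda parse: parser_logprob(parse) + best_fe_logprob(parse))
```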
one word in the sentenceas the coverage of our frame descriptions increases it may be possible to do better and to model the interactions between the frames invoked by a textmost of the statistics used in the system as described above are conditioned on the target word or predicate for which semantic roles are being identifiedthis limits the applicability of the system to words for which training data are availablein section 6 we attempted to generalize across fillers for the roles of a single predicatein this section we turn to the related but somewhat more difficult question of generalizing from seen to unseen predicatesmany ways of attempting this generalization are possible but the simplest is provided by the framesemantic information of the framenet databasewe can use data from target words in the same frame to predict behavior for an unseen word or if no data are available for the frame in question we can use data from the same broad semantic domain into which the frames are groupedto investigate the degree to which our system is dependent on the set of semantic roles used we performed experiments using abstract general semantic roles such as agent patient and goalsuch roles were proposed in theories of linking such as fillmore and jackendoff to explain the syntactic realization of semantic argumentsthis level of roles often called thematic roles was seen as useful for expressing generalizations such as if a sentence has an agent the agent will occupy the subject position such correlations might enable a statistical system to generalize from one semantic domain to anotherrecent work on linguistic theories of linking has attempted to explain syntactic realization in terms of the fundamentals of verbs meaning although such an explanation is desirable our goal is more modest an automatic procedure for identifying semantic roles in textwe aim to use abstract roles as a means of generalizing from limited training data in various semantic domainswe see this effort as consistent with various theoretical accounts of the underlying mechanisms of argument linking since the various theories all postulate some sort of generalization between the roles of specific predicatesto this end we developed a correspondence from framespecific roles to a set of abstract thematic rolesfor each frame an abstract thematic role was assigned to each frame element in the frames definitionsince there is no canonical set of abstract semantic roles we decided upon the list shown in table 17we are interested in adjuncts as well as arguments leading to roles such as degree not found in many theories of verbargument linkingthe difficulty of fitting many relations into standard categories such as agent and patient led us to include other roles such as topicin all we used 18 roles a somewhat richer set than is often used but still much more restricted than the framespecific roleseven with this enriched set not all framespecific roles fit neatly into one categoryan experiment was performed replacing each role tag in the training and test data with the corresponding thematic role and training the system as described above on the new datasetresults were roughly comparable for the two types of semantic roles overall performance was 821 for thematic roles compared to 804 for framespecific rolesthis reflects the fact that most frames had a onetoone mapping from framespecific to abstract roles so the tasks were largely equivalentwe expect abstract roles to be most useful when one is generalizing to predicates and frames not 
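To illustrate the retagging experiment described above, here is a minimal sketch; the particular frame-to-thematic-role entries are invented placeholders, not the actual mapping developed for the paper.

```python
# Hypothetical fragment of a (frame, frame-specific role) -> thematic role table.
THEMATIC_MAP = {
    ("Motion", "Theme"): "Theme",
    ("Communication", "Speaker"): "Agent",
    ("Communication", "Message"): "Topic",
    ("Judgment", "Evaluee"): "Patient",
}

def retag(examples, mapping=THEMATIC_MAP):
    """Replace each frame-specific role with its abstract thematic role."""
    out = []
    for ex in examples:
        role = mapping.get((ex["frame"], ex["role"]), ex["role"])
        out.append({**ex, "role": role})
    return out

data = [{"frame": "Communication", "role": "Speaker", "head": "she"}]
print(retag(data))   # the role becomes 'Agent'
```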
found in the training data the topic of the following sectionsone interesting consequence of using abstract roles is that they allow us to compare more easily the systems performance on different roles because of the smaller number of categoriesthis breakdown is shown in table 18results are given for two systems the first assumes that the frame element boundaries are known and the second finds them automaticallythe second system which is described in section 71 corresponds to the rightmost two columns in table 18the labeled recall column shows how often the frame element is correctly identified whereas the unlabeled recall column shows how often a constituent with the given role is correctly identified as being a frame element even if it is labeled with the wrong roleexperiencer and agent two similar roles generally found as the subject for complementary sets of verbs are the roles that are correctly identified the most oftenthe unlabeled recall column shows that these roles are easy to find in the sentence as a predicates subject is almost always a frame element and the known boundaries column shows that they are also not often confused with other roles when it is known that they are frame elementsthe two most difficult roles in terms of unlabeled recall manner and degree are typically realized by adverbs or prepositional phrases and considered adjunctsit is interesting to note that these are considered in framenet to be general frame elements that can be used in any framestate rex spied out sam maggott hollering at all and sundry and making good use of his oversized red gingham handkerchieftopic he said we would urge people to be aware and be alert with fireworks because your fun might be someone elses tragedy this section has shown that our system can use roles defined at a more abstract level than the corpuss framelevel roles and in fact that when we are looking at a single predicate the choice has little effectin the following sections we attempt to use the abstract roles to generalize the behavior of semantically related predicateswe will present results at different successively broader levels of generalization making use of the categorization of framenet predicates into frames and more general semantic domainswe first turn to using data from the appropriate frame when no data for the target word are availabletable 19 shows results for various probability distributions using a division of training and test data constructed such that no target words are in commonevery tenth target word was included in the test setthe amount of training data available for each frame varied from just one target word in some cases to 167 target words in the perceptionnoise framethe training set contained a total of 75919 frame elements and the test set 7801 frame elementsperformance broken down by abstract rolethe third column represents accuracy when frame element boundaries are given to the system and the fourth and fifth columns reflect finding the boundaries automaticallyunlabeled recall includes cases that were identified as a frame element but given the wrong rolethe results show a familiar tradeoff between coverage and accuracyconditioning both the head word and path features on the frame reduces coverage but improves accuracya linear interpolation λ1p λ2p λ3p achieved 794 performance on the test set significantly better than any of the individual distributions and approaching the result of 821 for the original system using targetspecific statistics and thematic rolesthis result indicates that 
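The linear interpolation of frame-conditioned distributions mentioned above can be sketched as follows; the component distributions and weights shown are placeholders, since the exact conditioning features are given in the paper's tables rather than reproduced here.

```python
def interpolated_role_prob(role, feats, frame, dists, weights):
    """Linear interpolation of frame-conditioned role distributions.

    dists: list of functions (role, feats, frame) -> probability (0.0 if unseen)
    weights: interpolation weights, assumed to sum to 1
    """
    return sum(w * d(role, feats, frame) for w, d in zip(weights, dists))

# toy example with two hypothetical component distributions
d1 = lambda r, f, fr: 0.7 if (r, fr) == ("Agent", "Perception_noise") else 0.0
d2 = lambda r, f, fr: 0.4 if r == "Agent" else 0.2
print(interpolated_role_prob("Agent", {}, "Perception_noise", [d1, d2], [0.6, 0.4]))
```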
predicates in the same frame behave similarly in terms of their argument structure a finding generally consistent with theories of linking that claim that the syntactic realization of verb arguments can be predicted from their semanticswe would expect verbs in the same frame to be semantically similar and to have the same patterns of argument structurethe relatively high performance of framelevel statistics indicates that the minimal lattice for crossframe generalization frames defined by framenet are finegrained enough to capture the relevant semantic similaritiesthis result is encouraging in that it indicates that a relatively small amount of data can be annotated for a few words in a semantic frame and used to train a system that can then bootstrap to a larger number of predicatesmore difficult than the question of unseen predicates in a known frame are frames for which no training data are presentthe 67 frames in the current data set cover only a fraction of the english language and the high cost of annotation makes it difficult to expand the data set to cover all semantic domainsthe framenet project is defining additional frames and annotating data to expand the scope of the databasethe question of how many frames exist however remains unanswered for the time being a full account of frame semantics is expected to include multiple frames being invoked by many words as well as an inheritance hierarchy of frames and a more detailed representation of each frames meaningin this section we examine the framenet data by holding out an entire frame for testing and using other frames from the same general semantic domain for trainingrecall from figure 1 that domains like communication include frames like conversation questioning and statementbecause of the variation in difficulty between different frames and the dependence of the results on which frames are held out for testing we used a jackknifing methodologyeach frame was used in turn as test data with all other frames used as training datathe results in table 20 show average results over the entire data setcombining the distributions gives a system based on the backoff lattice of figure 13this system achieves performance of 510 compared to 821 for the original system and 794 for the withinframe generalization taskthe results show that generalizing across frames even within a domain is more difficult than generalizing across target words within a framethere are several factors that may account for this the framenet domains were intended primarily as a way of organizing the project and their semantics have not been formalizedthus it may not be surprising that they do not correspond to significant generalizations about argument structurethe domains are fairly broad as indicated by the fact that always choosing the most common role for a given domain in table 20 classifies 284 of frame elements correctly does not do better than the crossdomain baseline of always choosing the most common role from the entire database regardless of domain in table 20 which yields 287 correctthis contrasts with a 409 baseline for p that is always choosing the most common role for a particular target word domain information does not seem to help a great deal given no information about the framefurthermore the crossframe experiments here are dependent on the mapping of framelevel roles to abstract thematic rolesthis mapping was done at the frame level that is framenet roles with the same label in two different frames may be translated into two different thematic 
roles but all target words in the same frame make use of the same mappingthe mapping of roles within a frame is generally one to one and therefore the choice of mapping has little effect when using statistics conditioned on the target word and on the frame as in the previous sectionwhen we are attempting to generalize between frames the mapping determines which roles from the training frame are used to calculate probabilities for the roles in the test frames and the choice of mapping is much more significantthe mapping used is necessarily somewhat arbitraryit is interesting to note that the path feature performs better when not conditioned on the domainthe head word however seems to be more domainspecific although coverage declines when the context is restricted to the semantic domain accuracy improvesthis seems to indicate that the identity of certain role fillers is domainspecific but that the syntaxsemantics correspondence captured by the path feature is more general as predicted by theories of syntactic linkingas general as they are the semantic domains of the current framenet database cover only a small portion of the languagethe domains are defined at the level of for example communication and emotion a list of the 12 domains in our corpus is given in table 1whether generalization is possible across domains is an important question for a general languageunderstanding systemfor these experiments a jackknifing protocol similar to that of the previous section was used this time holding out one entire domain at a time and using all the others as training materialresults for the path and head word feature are shown in table 21the distributions p p and p of table 21 also appeared in table 20 the difference between the experiments is only in the division of training and test setsa linear interpolation λ1p λ2p classifies 398 of frame elements correctlythis is no better than our result of 409 for always choosing a predicates most frequent role however the crossdomain system does not have role frequencies for the test predicatesas one might expect as we make successively broader generalizations to semantically more distant predicates performance degradesour results indicate that frame semantics give us a level at which generalizations relevant to argument linking can be madeour results for unseen predicates within the same frame are encouraging indicating that the predicates are semantically similar in ways that result in similar argument structure as the semantically based theories of linking advocated by levin and levin and rappaport hovav would predictwe hope that corpusbased systems such as ours can provide a way of testing and elaborating such theories in the futurewe believe that some level of skeletal representation of the relevant aspects of a words meaning along the lines of kipper et al and of the frame hierarchy being developed by the framenet project could be used in the future to help a statistical system generalize from similar words for which training data are availableour system is able to label semantic roles automatically with fairly high accuracy indicating promise for applications in various natural language taskssemantic roles do not seem to be simple functions of a sentences syntactic tree structure and lexical statistics were found to be extremely valuable as has been the case in other natural language processing applicationsalthough lexical statistics are quite accurate on the data covered by observations in the training set the sparsity of their coverage led us to 
introduce semantically motivated knowledge sources which in turn allowed us to compare automatically derived and handbuilt semantic resources various methods of extending the coverage of lexical statistics indicated that the broader coverage of automatic clustering outweighed its imprecision carefully choosing sentencelevel features for representing alternations in verb argument structure allowed us to introduce dependencies between frame element decisions within a sentence without adding too much complexity to the system integrating semantic interpretation and syntactic parsing yielded only the slightest gain showing that although probabilistic models allow easy integration of modules the gain over an unintegrated system may not be large because of the robustness of even simple probabilistic systems many aspects of our system are still quite preliminary for example our system currently assumes knowledge of the correct frame type for the target word to determine the semantic roles of its arguments a more complete semantic analysis system would thus require a module for frame disambiguation it is not clear how difficult this problem is and how much it overlaps with the general problem of wordsense disambiguation much else remains to be done to apply the system described here to the interpretation of general text one technique for dealing with the sparseness of lexical statistics would be the combination of framenet data with namedentity systems for recognizing times dates and locations the effort that has gone into recognizing these items typically used as adjuncts should complement the framenet data which is more focused on arguments generalization to predicates for which no annotated data are available may be possible using other lexical resources or automatic clustering of predicates automatically learning generalizations about the semantics and syntactic behavior of predicates is an exciting problem for the years to come ( appendix penn treebank constituent labels ) we are grateful to chuck fillmore andreas stolcke jerry feldman and three anonymous reviewers for their comments and suggestions to collin baker for his assistance with the framenet data and to mats rooth and sabine schulte im walde for making available their parsed corpus this work was primarily funded by national science foundation grant itr/hci 0086132 to the framenet project
J02-3001
automatic labeling of semantic roleswe present a system for identifying the semantic relationships or semantic roles filled by constituents of a sentence within a semantic framegiven an input sentence and a target word and frame the system labels constituents with either abstract semantic roles such as agent or patient or more domainspecific semantic roles such as speaker message and topicthe system is based on statistical classifiers trained on roughly 50000 sentences that were handannotated with semantic roles by the framenet semantic labeling projectwe then parsed each training sentence into a syntactic tree and extracted various lexical and syntactic features including the phrase type of each constituent its grammatical function and its position in the sentencethese features were combined with knowledge of the predicate verb noun or adjective as well as information such as the prior probabilities of various combinations of semantic roleswe used various lexical clustering algorithms to generalize across possible fillers of rolestest sentences were parsed were annotated with these features and were then passed through the classifiersour system achieves 82 accuracy in identifying the semantic role of presegmented constituentsat the more difficult task of simultaneously segmenting constituents and identifying their semantic role the system achieved 65 precision and 61 recallour study also allowed us to compare the usefulness of different features and feature combination methods in the semantic role labeling taskwe also explore the integration of role labeling with statistical syntactic parsing and attempt to generalize to predicates unseen in the training datawe propose the first srl model on framenet
summarizing scientific articles experiments with relevance and rhetorical status edinburgh in this article we propose a strategy for the summarization of scientific articles that concentrates on the rhetorical status of statements in an article material for summaries is selected in such a way that summaries can highlight the new contribution of the source article and situate it with respect to earlier work we provide a gold standard for summaries of this kind consisting of a substantial corpus of conference articles in computational linguistics annotated with human judgments of the rhetorical status and relevance of each sentence in the articles we present several experiments measuring our judges agreement on these annotations we also present an algorithm that on the basis of the annotated training material selects content from unseen articles and classifies it into a fixed set of seven rhetorical categories the output of this extraction and classification system can be viewed as a singledocument summary in its own right alternatively it provides starting material for the generation of taskoriented and usertailored summaries designed to give users an overview of a scientific field summarization systems are often twophased consisting of a content selection step followed by a regeneration stepin the first step text fragments are assigned a score that reflects how important or contentful they arethe highestranking material can then be extracted and displayed verbatim as extracts extracts are often useful in an information retrieval environment since they give users an idea as to what the source document is about but they are texts of relatively low qualitybecause of this it is generally accepted that some kind of postprocessing should be performed to improve the final result by shortening fusing or otherwise revising the material the extent to which it is possible to do postprocessing is limited however by the fact that contentful material is extracted without information about the general discourse context in which the material occurred in the source textfor instance a sentence describing the solution to a scientific problem might give the main contribution of the paper but it might also refer to a previous approach that the authors criticizedepending on its rhetorical context the same sentence should be treated very differently in a summarywe propose in this article a method for sentence and content
selection from source texts that adds context in the form of information about the rhetorical role the extracted material plays in the source textthis added contextual information can then be used to make the end product more informative and more valuable than sentence extractsour application domain is the summarization of scientific articlessummarization of such texts requires a different approach from for example that used in the summarization of news articlesfor example barzilay mckeown and elhadad introduce the concept of information fusion which is based on the identification of recurrent descriptions of the same events in news articlesthis approach works well because in the news domain newsworthy events are frequently repeated over a short period of timein scientific writing however similar events are rare the main focus is on new scientific ideas whose main characteristic is their uniqueness and difference from previous ideasother approaches to the summarization of news articles make use of the typical journalistic writing style for example the fact that the most newsworthy information comes first as a result the first few sentences of a news article are good candidates for a summary the structure of scientific articles does not reflect relevance this explicitlyinstead the introduction often starts with general statements about the importance of the topic and its history in the field the actual contribution of the paper itself is often given much laterthe length of scientific articles presents another problemlet us assume that our overall summarization strategy is first to select relevant sentences or concepts and then to synthesize summaries using this materialfor a typical 10 to 20sentence news wire story a compression to 20 or 30 of the source provides a reasonable input set for the second stepthe extracted sentences are still thematically connected and concepts in the sentences are not taken completely out of contextin scientific articles however the compression rates have to be much higher shortening a 20page journal article to a halfpage summary requires a compression to 25 of the originalhere the problematic fact that sentence selection is context insensitive does make a qualitative differenceif only one sentence per two pages is selected all information about how the extracted sentences and their concepts relate to each other is lost without additional information it is difficult to use the selected sentences as input to the second stagewe present an approach to summarizing scientific articles that is based on the idea of restoring the discourse context of extracted material by adding the rhetorical status to each sentence in a documentthe innovation of our approach is that it defines principles for content selection specifically for scientific articles and that it combines sentence extraction with robust discourse analysisthe output of our system is a list of extracted sentences along with their rhetorical status as illustrated in figure 1such lists serve two purposes in themselves they already provide a better characterization of scientific articles than sentence extracts do and in the longer run they will serve as better input material for further processingan extrinsic evaluation shows that the output of our system is already a useful document surrogate in its own rightbut postprocessing could turn teufel and moens summarizing scientific articles aim 10 our research addresses some of the same questions and uses similar raw data but we investigate how to factor word 
association tendencies into associations of words to certain hidden senses classes and associations between the classes themselves11 while it may be worthwhile to base such a model on preexisting sense classes in the work described here we look at how to derive the classes directly from distributional data162 we have demonstrated that ageneral divisive clustering procedurefor probability distributions can be used to group words according to their participation in particular grammatical relations with other wordscontrast 9 his notion ofsimilarity seems to agree with our intuitions in many cases but it is not clear how it can be used directly to construct word classes and corresponding models of association14 class construction is then combinatorially very demanding and depends on frequency counts for joint events involving particular words a potentially unreliable source of information as we noted abovebasis 19 the corpus used in our first experiment was derived from newswire text automatically parsed by hindles parser fidditch 113 the analogy with statistical mechanics suggests a deterministic annealing procedure for clustering in which the number of clusters is determined through a sequence of phase transitions by continuously increasing the parameter eqn following an annealing schedulenonexpert summary general purpose the rhetorical extracts into something even more valuable the added rhetorical context allows for the creation of a new kind of summaryconsider for instance the useroriented and tasktailored summaries shown in figures 2 and 3their composition was guided by fixed building plans for different tasks and different user models whereby the building blocks are defined as sentences of a specific rhetorical statusin our example most textual material is extracted verbatim the first example is a short abstract generated for a nonexpert user and for general information its first two sentences give background information about the problem tackledthe second abstract is aimed at an expert therefore no background is given and instead differences between this approach and similar ones are describedthe actual construction of these summaries is a complex process involving tasks such as sentence planning lexical choice and syntactic realization tasks that are outside the scope of this articlethe important point is that it is the knowledge about the rhetorical status of the sentences that enables the tailoring of the summaries according to users expertise and taskthe rhetorical status allows for other kinds of applications too several articles can be summarized together contrasts or complementarity among expert summary contrastive links articles can be expressed and summaries can be displayed together with citation links to help users navigate several related papersthe rest of this article is structured as follows section 2 describes the theoretical and empirical aspects of document structure we model in this articlethese aspects include rhetorical status and relatedness these aspects of rhetorical status are encoded in an annotation scheme that we present in section 24annotation of relevance is covered in section 25in section 3 we report on the construction of a gold standard for rhetorical status and relevance and on the measurement of agreement among human annotatorswe then describe in section 4 our system that simulates the human annotationsection 5 presents an overview of the intrinsic evaluation we performed and section 6 closes with a summary of the contribution of this work its 
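The fixed building plans mentioned above can be pictured schematically as follows; the plan contents, category labels, and example sentences are simplified assumptions rather than the authors' actual plans.

```python
# Hypothetical building plans: an ordered list of rhetorical categories per user model.
BUILDING_PLANS = {
    "nonexpert_general": ["BACKGROUND", "AIM", "OWN"],
    "expert_contrastive": ["AIM", "CONTRAST", "BASIS"],
}

def build_summary(classified_sentences, plan_name, max_per_slot=2):
    """Assemble a summary by filling each slot of the plan with extracted
    sentences of that rhetorical status (mostly verbatim, as in the examples)."""
    plan = BUILDING_PLANS[plan_name]
    summary = []
    for category in plan:
        picked = [s for s, cat in classified_sentences if cat == category]
        summary.extend(picked[:max_per_slot])
    return " ".join(summary)

sents = [("Sentence extraction alone loses the discourse context of the material.", "BACKGROUND"),
         ("We propose to attach a rhetorical status to every extracted sentence.", "AIM"),
         ("Unlike plain extracts, the output records how each sentence relates to the paper.", "CONTRAST")]
print(build_summary(sents, "nonexpert_general"))
```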
limitations and suggestions for future workit is important for our task to find the right definition of rhetorical status to describe the content in scientific articlesthe definition should both capture generalizations about the nature of scientific texts and also provide the right kind of information to enable the construction of better summaries for a practical applicationanother requirement is that the analysis should be applicable to research articles from different presentational traditions and subject mattersfor the development of our scheme we used the chronologically first 80 articles in our corpus of conference articles in computational linguistics acl conferences or workshopsbecause of the interdisciplinarity of the field the papers in this collection cover a challenging range of subject matters such as logic programming statistical language modeling theoretical semantics computational dialectology and computational psycholinguisticsalso the research methodology and tradition of presentation is very different among these fields we thus expect our analysis to be equally applicable in a wider range of disciplines and subdisciplines other than those namedour model relies on the following dimensions of document structure in scientific articlesproblem structureresearch is often described as a problemsolving activity three information types can be expected to occur in any research article problems solutions and resultsin many disciplines particularly the experimental sciences this problemsolution structure has been crystallized in a fixed presentation of the scientific material as introduction method result and discussion but many texts in computational linguistics do not adhere to this presentation and our analysis therefore has to be based on the underlying logical organization using textual representation only as an indicationintellectual attributionscientific texts should make clear what the new contribution is as opposed to previous work and background material we noticed that intellectual attribution has a segmental characterstatements in a segment without any explicit attribution are often interpreted as belonging to the most recent explicit attribution statement our rhetorical scheme assumes that readers have no difficulty in understanding intellectual attribution an assumption that we verified experimentallyscientific argumentationin contrast to the view of science as a disinterested fact factory researchers like swales have long claimed that there is a strong social aspect to science because the success of a researcher is correlated with her ability to convince the field of the quality of her work and the validity of her argumentsauthors construct an argument that myers calls the rhetorical act of the paper the statement that their work is a valid contribution to scienceswales breaks down this rhetorical act into single nonhierarchical argumentative moves his constructing a research space model shows how patterns of these moves can be used to describe the rhetorical structure of introduction sections of physics articlesimportantly swaless moves describe the rhetorical status of a text segment with respect to the overall message of the document and not with respect to adjacent text segmentsattitude toward other peoples workwe are interested in how authors include reference to other work into their argumentin the flow of the argument each piece of other work is mentioned for a specific reason it is portrayed as a rival approach as a prior approach with a fault or as an approach 
contributing parts of the authors own solutionin wellwritten papers this relation is often expressed in an explicit waythe next section looks at the stylistic means available to the author to express the connection between previous approaches and their own workexplicit metadiscourse is an integral aspect of scientific argumentation and a way of expressing attitude toward previous workexamples for metadiscourse are phrases like we argue that and in contrast to common belief wemetadiscourse is ubiquitous in scientific writing hyland found a metadiscourse phrase on average after every 15 words in running texta large proportion of scientific metadiscourse is conventionalized particularly in the experimental sciences and particularly in the methodology or result section swales lists many such fixed phrases as cooccurring with the moves of his cars model they are useful indicators of overall importance they can also be relatively easily recognized with information extraction techniques paice introduces grammars for pattern matching of indicator phrases eg the aimpurpose of this paperarticlestudy and we concludeproposeapart from this conventionalized metadiscourse we noticed that our corpus contains a large number of metadiscourse statements that are less formalized statements about aspects of the problemsolving process or the relation to other workfigure 4 for instance shows that there are many ways to say that ones research is based on somebody elses the sentences do not look similar on the surface the syntactic subject can be the authors the originators of the method or even the method itselfalso the verbs are very different some sentences use metaphors of change and creationthe wide range of linguistic expression we observed presents a challenge for recognition and correct classification using standard information extraction patternswith respect to agents occurring in scientific metadiscourse we make two suggestions that scientific argumentation follows prototypical patterns and employs recurrent types of agents and actions and that it is possible to recognize many of these automaticallyagents play fixed roles in the argumentation and there are so statements expressing research continuation with source article numberteufel and moens summarizing scientific articles few of these roles that they can be enumerated agents appear as rivals as contributors of part of the solution as the entire research community in the field or as the authors of the paper themselves note the similarity of agent roles to the three kinds of intellectual attribution mentioned abovewe also propose prototypical actions frequently occurring in scientific discourse the field might agree a particular researcher can suggest something and a certain solution could either fail or be successfulin section 4 we will describe the three features used in our implementation that recognize metadiscourseanother important construct that expresses relations to other researchers work is formal citations to which we will now turncitation indexes are constructs that contain pointers between cited texts and citing texts traditionally in printed formwhen done online citations are presented in context for users to browsebrowsing each citation is timeconsuming but useful just knowing that an article cites another is often not enoughone needs to read the context of the citation to understand the relation between the articlescitations may vary in many dimensions for example they can be central or perfunctory positive or negative apart from 
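As a toy illustration of the indicator-phrase matching mentioned above (in the spirit of Paice's patterns, but with regular expressions that are our own rough approximations), consider:

```python
import re

# Simplified approximations of conventional metadiscourse patterns.
INDICATOR_PATTERNS = {
    "AIM": re.compile(r"\bthe (aim|purpose|goal) of this (paper|article|study)\b", re.I),
    "OWN": re.compile(r"\bwe (conclude|propose|present|argue)\b", re.I),
    "CONTRAST": re.compile(r"\bin contrast to\b|\bunlike\b", re.I),
    "BASIS": re.compile(r"\bbased on\b|\bfollowing the (approach|method|work) of\b", re.I),
}

def match_indicators(sentence):
    """Return the rhetorical categories whose indicator patterns fire."""
    return [cat for cat, pat in INDICATOR_PATTERNS.items() if pat.search(sentence)]

print(match_indicators("The aim of this paper is to label rhetorical status."))
print(match_indicators("In contrast to common belief, we argue that context matters."))
```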
scientific reasons there is also a host of social reasons for citing we concentrate on two citation contexts that are particularly important for the information needs of researchers a distinction among these contexts would enable us to build more informative citation indexeswe suggest that such a rhetorical distinction can be made manually and automatically for each citation we use a large corpus of scientific papers along with humans judgments of this distinction to train a system to make such distinctionsour rhetorical annotation scheme encodes the aspects of scientific argumentation metadiscourse and relatedness to other work described beforethe categories are assigned to full sentences but a similar scheme could be developed for clauses or phrasesthe annotation scheme is nonoverlapping and nonhierarchical and each sentence must be assigned to exactly one categoryas adjacent sentences of the same status can be considered to form zones of the same rhetorical status we call the units rhetorical zonesthe shortest of these zones are one sentence longthe rhetorical status of a sentence is determined on the basis of the global context of the paperfor instance whereas the other category describes all neutral descriptions of other researchers work the categories basis and contrast are applicable to sentences expressing a research continuation relationship or a contrast to other workgenerally accepted knowledge is classified as background whereas the authors own work is separated into the specific research goal and all other statements about the authors own work the annotation scheme expresses important discourse and argumentation aspects of scientific articles but with its seven categories it is not designed to model the full complexity of scientific textsthe category own for instance could be further subdivided into method results and further work which is not done in the work reported herethere is a conflict between explanatory power and the simplicity necessary for reliable human and automatic classification and we decided to restrict ourselves to the rhetorical distinctions that are most salient and potentially most useful for several information access applicationsthe usertailored summaries and more informative citation indexes we mentioned before are just two such applications another one is the indexing and previewing of the internal structure of the articleto make such indexing and previewing possible our scheme contains the additional category textual which captures previews of section structure such previews would make it possible to label sections with the authors indication of their contentsour rhetorical analysis as noted above is nonhierarchical in contrast to rhetorical structure theory and it concerns text pieces at a lower level of granularityalthough we do agree with rst that the structure of text is hierarchical in many cases it is our belief that the relevance and function of certain text pieces can be determined without analyzing the full hierarchical structure of the textanother difference between our analysis and that of rst is that our analysis aims at capturing the rhetorical status of a piece of text in respect to the overall message and not in relation to adjacent pieces of textas our immediate goal is to select important content from a text we also need a second set of gold standards that are defined by relevance relevance is a difficult issue because it is situational to a unique occasion humans perceive relevance differently from each other and differently in 
different situationspaice and jones report that they abandoned an informal sentence selection experiment in which they used agriculture articles and experts in the field as participants as the participants were too strongly influenced by their personal research interestas a result of subjectivity a number of human sentence extraction experiments over the years have resulted in low agreement figuresrath resnick and savage report that six participants agreed on only 8 of 20 sentences they were asked to select out of short scientific american texts and that five agreed on 32 of the sentencesthey found that after six weeks subjects selected on average only 55 of the sentences they themselves selected previouslyedmundson and wyllys teufel and moens summarizing scientific articles find similarly low human agreement for research articlesmore recent experiments reporting more positive results all used news text as discussed above the compression rates on news texts are far lower there are fewer sentences from which to choose making it easier to agree on which ones to selectsentence selection from scientific texts also requires more background knowledge thus importing an even higher level of subjectivity into sentence selection experimentsrecently researchers have been looking for more objective definitions of relevancekupiec pedersen and chen define relevance by abstract similarity a sentence in a document is considered relevant if it shows a high level of similarity to a sentence in the abstractthis definition of relevance has the advantage that it is fixed it relies however on two assumptions that the writing style is such that there is a high degree of overlap between sentences in the abstract and in the main text and that the abstract is indeed the target output that is most adequate for the final taskin our case neither assumption holdsfirst the experiments in teufel and moens showed that in our corpus only 45 of the abstract sentences appear elsewhere in the body of the document whereas kupiec pedersen and chen report a figure of 79we believe that the reason for the difference is that in our case the abstracts were produced by the document authors and by professional abstractors in kupiec pedersen and chens caseauthor summaries tend to be less systematic and more deep generated whereas summaries by professional abstractors follow an internalized building plan and are often created through sentence extraction second and more importantly the abstracts and improved citation indexes we intend to generate are not modeled on traditional summaries which do not provide the type of information needed for the applications we have in mindinformation about related work plays an important role in our strategy for summarization and citation indexing but such information is rarely found in abstractswe empirically found that the rhetorical status of information occurring in author abstracts is very limited and consists mostly of information about the goal of the paper and specifics of the solutiondetails of the analysis we conducted on this topic are given in section 322we thus decided to augment our corpus with an independent set of human judgments of relevancewe wanted to replace the vague definition of relevance often used in sentence extraction experiments with a more operational definition based on rhetorical statusfor instance a sentence is considered relevant only if it describes the research goal or states a difference with a rival approachmore details of the instructions we used to make the 
relevance decisions are given in section 3thus we have two parallel human annotations in our corpus rhetorical annotation and relevance selectionin both tasks each sentence in the articles is classified each sentence receives one rhetorical category and also the label irrelevant or relevantthis strategy can create redundant material but this redundancy also helps mitigate one of the main problems with sentencebased gold standards namely the fact that there is no one single best extract for a documentin our annotation all qualifying sentences in the document are identified and classified into the same group which makes later comparisons with system performance faireralso later steps cannot only find redundancy in the intermediate result and remove it but also use the redundancy as an indication of importanceexample of manual annotation relevant sentences with rhetorical statusfigure 5 gives an example of the manual annotationrelevant sentences of all rhetorical categories are shownour system creates a list like the one in figure 5 automatically in the next section we turn to the manual annotation step and the development of the gold standard used during system training and system evaluationfor any linguistic analysis that requires subjective interpretation and that is therefore not objectively true or false it is important to show that humans share some intuitions about the analysisthis is typically done by showing that they can apply it independently of each other and that the variation they display is bounded the argument is strengthened if the judges are people other than the developers of the analysis preferably naive subjects apart from the cognitive validation of our analysis high agreement is essential if the annotated corpus is to be used as training material for a machine learning process like the one we describe in section 4noisy and unreliably annotated training material will very likely deteriorate the classification performancein inherently subjective tasks it is also common practice to consider human performance as an upper boundthe theoretically best performance of a system is reached if agreement among a pool of human annotators does not decrease when the system is added to the poolthis is so because an automatic process cannot do any better in this situation than to be indistinguishable from human performancethe annotated development corpus consists of 80 conference articles in computational linguistics it is part of a larger corpus of 260 articles that we collected from the cmp lg archive the appendix lists the 80 articles of our development corpus it consists of the 80 chronologically oldest articles in the larger corpus containing articles deposited between may 1994 and may 1996 papers were included if they were presented at one of the following conferences the annual meeting of the association for computational linguistics the meeting of the european chapter of the association for computational linguistics the conference on applied natural language processing the international joint conference on artificial intelligence or the international conference on computational linguistics as mentioned above a wide range of different subdomains of the field of computational linguistics are coveredwe added extensible markup language markup to the corpus titles authors conference date abstract sections headlines paragraphs and sentences were marked upequations tables images were removed and replaced by placeholdersbibliography lists were marked up and parsedcitations and 
occurrences of author names in running text were recognized, and self-citations were recognized and specifically marked up. Example sentences and example pseudocode were manually marked up such that clean textual material was isolated for automatic processing. The implementation uses the text tokenization toolkit software.

The annotation experiment described here tests the rhetorical annotation scheme presented in Section 2.4.

Annotators. Three task-trained annotators were used. Annotators A and B have degrees in cognitive science and speech therapy; they were paid for the experiment. Both are well used to reading scientific articles for their studies and roughly understand the contents of the articles they annotated, because of the closeness of their fields to computational linguistics. Annotator C is the first author. We did not want to declare annotator C the expert annotator; we believe that in subjective tasks like the one described here there are no real experts.

Guidelines. Written guidelines describe the semantics of the categories, ambiguous cases, and decision strategies. The guidelines also include the decision tree reproduced in Figure 6, with questions such as "Does this sentence refer to new, current work by the authors, or support for the current paper?"

Training. Annotators received a total of 20 hours of training. Training consisted of the presentation of the annotation of six example papers and the annotation of eight training articles under real conditions; in subsequent training sessions, decision criteria for difficult cases encountered in the training articles were discussed. Obviously, the training articles were excluded from measurements of human agreement.

Materials and procedure. Twenty-five articles were used for annotation. As no annotation tool was available at the time, annotation was performed on paper; the categories were later transferred to the electronic versions of the articles by hand. Skim-reading and annotation typically took between 20 and 30 minutes per article, but there were no time restrictions. No communication between the annotators was allowed during annotation. Six weeks after the initial annotation, annotators were asked to reannotate 6 random articles out of the 25.

Evaluation measures. We measured two formal properties of the annotation: stability and reproducibility. Stability, the extent to which one annotator will produce the same classifications at different times, is important because an unstable annotation scheme can never be reproducible. Reproducibility, the extent to which different annotators will produce the same classifications, is important because it measures the consistency of shared understandings held between annotators. We use the kappa coefficient K to measure stability and reproducibility, following Carletta. The kappa coefficient is defined as

K = (P(A) - P(E)) / (1 - P(E)),

where P(A) is pairwise agreement and P(E) is random agreement. K varies between 1, when agreement is perfect, and -1, when there is a perfect negative correlation. K = 0 is defined as the level of agreement that would be reached by random annotation using the same distribution of categories as the actual annotators did. The main advantage of kappa as an annotation measure is that it factors out random agreement by numbers of categories and by their distribution. As kappa also abstracts over the number of annotators considered, it allows us to compare the agreement numerically among a group of human annotators with the agreement between the system and one or more annotators, which we use as one of the performance measures of the system.
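As a concrete illustration of this coefficient, the following is a minimal sketch of how K can be computed for a set of sentences that have each been labeled by the same group of annotators. The category abbreviations and the toy data are invented for illustration; the article does not show its own implementation.

```python
from collections import Counter
from itertools import combinations

def kappa(annotations):
    """Kappa for a list of items, each labeled by the same group of coders.

    `annotations` is a list of tuples, one tuple of category labels per
    sentence, e.g. [("OWN", "OWN", "AIM"), ...].  P(A) is the observed
    pairwise agreement averaged over all coder pairs and items; P(E) is
    the agreement expected by chance, derived from the pooled category
    distribution of the actual annotators.
    """
    n_pairs = 0
    agreeing_pairs = 0
    pooled = Counter()
    for labels in annotations:
        pooled.update(labels)
        for a, b in combinations(labels, 2):
            n_pairs += 1
            agreeing_pairs += (a == b)
    p_a = agreeing_pairs / n_pairs
    total = sum(pooled.values())
    p_e = sum((count / total) ** 2 for count in pooled.values())
    return (p_a - p_e) / (1 - p_e)

# Toy example with three annotators and four sentences:
print(kappa([("OWN", "OWN", "AIM"),
             ("BKG", "BKG", "BKG"),
             ("OWN", "OWN", "OWN"),
             ("CTR", "BAS", "CTR")]))
```

On this toy input the pairwise agreement P(A) is 8/12, P(E) is derived from the pooled label distribution, and K comes out at roughly 0.54.

3.2.2 Results. The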
annotation experiments show that humans distinguish the seven rhetorical categories with a stability of k 82 81 76 annotated and k for the number of annotatorsthis is equivalent to 93 92 and 90 agreementreproducibility was measured at k 71 which is equivalent to 87 agreementon krippendorffs scale agreement of k 8 or above is considered as reliable agreement of 678 as marginally reliable and less than 67 as unreliableon landis and kochs more forgiving scale agreement of 02 is considered as showing slight correlation 214 as fair 416 as moderate 61 08 as substantial and 81 10 as almost perfect according to these guidelines our results can be considered reliable substantial annotationtable 2 shows a confusion matrix between two annotatorsthe numbers represent absolute sentence numbers and the diagonal are the counts of sentences that were identically classified by both annotatorswe used krippendorffs diagnostics to determine which particular categories humans had most problems with for each category agreement is measured with a new data set in which all categories distribution of rhetorical categories except for the category of interest are collapsed into one metacategoryoriginal agreement is compared to that measured on the new data set high values show that annotators can distinguish the given category well from all otherswhen their results are compared to the overall reproducibility of k 71 the annotators were good at distinguishing aim and textual the high agreement in aim sentences is a positive result that seems to be at odds with previous sentence extraction experimentswe take this as an indication that some types of rhetorical classification are easier for human minds to do than unqualified relevance decisionwe also think that the positive results are partly due to the existence of the guidelinesthe annotators were less consistent at determining basis and contrast the same picture emerges if we look at precision and recall of single categories between two annotators precision and recall for aim and textual are high at 7256 and 7979 whereas they are lower for contrast and basis this contrast in agreement might have to do with the location of the rhetorical zones in the paper aim and textual zones are usually found in fixed locations and are explicitly marked with metadiscourse whereas contrast sentences and even more so basis sentences are usually interspersed within longer own zonesas a result these categories are more exposed to lapses of attention during annotationwith respect to the longer more neutral zones annotators often had problems in distinguishing other work from own work particularly in cases where the authors did not express a clear distinction between new work and previous own work another persistently problematic distinction for our annotators was that between own teufel and moens summarizing scientific articles and backgroundthis could be a sign that some authors aimed their papers at an expert audience and thus thought it unnecessary to signal clearly which statements are commonly agreed upon in the field as opposed to their own new claimsif a paper is written in such a way it can indeed be understood only with a considerable amount of domain knowledge which our annotators did not havebecause intellectual attribution is an important part of our annotation scheme we conducted a second experiment measuring how well our annotators could distinguish just these three roles using the same annotators and 22 different articleswe wrote seven pages of new guidelines describing 
the semantics of the three categoriesresults show higher stability compared to the full annotation scheme and higher reproducibility corresponding to 94 93 and 93 agreement and 93 it is most remarkable that agreement of annotation of intellectual attribution in the abstracts is almost perfect k 98 corresponding to 99 agreementthis points to the fact that authors when writing abstracts for their papers take care to make it clear to whom a certain statement is attributedthis effect also holds for the annotation with the full scheme with all seven categories again reproducibility in the abstract is higher than in the entire document but the effect is much weakerabstracts might be easier to annotate than the rest of a paper but this does not necessarily make it possible to define a gold standard solely by looking at the abstractsas foreshadowed in section 25 abstracts do not contain all types of rhetorical informationaim and own sentences make up 74 of the sentences in abstracts and only 5 of all contrast sentences and 3 of all basis sentences occur in the abstractabstracts in our corpus are also not structurally homogeneouswhen we inspected the rhetorical structure of abstracts in terms of sequences of rhetorical zones we found a high level of variationeven though the sequence aimown is very common the 80 abstracts still contain 40 different rhetorical sequences 28 of which are uniquethis heterogeneity is in stark contrast to the systematic structures liddy found to be produced by professional abstractorsboth observations the lack of certain rhetorical types in the abstracts and their rhetorical heterogeneity reassure us in our decision not to use humanwritten abstracts as a gold standardwe collected two different kinds of relevance gold standards for the documents in our development corpus abstractsimilar document sentences and additional manually selected sentencesin order to establish alignment between summary and document sentences we used a semiautomatic method that relies on a simple surface similarity measure as in kupiec pedersen and chens experiment final alignment was decided by a human judge and the criterion was semantic similarity of the two sentencesthe following sentence pair illustrates a direct match summary in understanding a reference an agent determines his confidence in its adequacy as a means of identifying the referentdocument an agent understands a reference once he is confident in the adequacy of its plan as a means of identifying the referentof the 346 abstract sentences contained in the 80 documents 156 could be aligned this waybecause of this low agreement and because certain rhetorical types are not present in the abstracts we decided not to rely on abstract alignment as our only gold standardinstead we used manually selected sentences as an alternative gold standard which is more informative but also more subjectivewe wrote eight pages of guidelines that describe relevance criteria the first author annotated all documents in the development corpus for relevance using the rhetorical zones and abstract similarity as aides in the relevance decision and also skimreading the whole paper before making the decisionthis resulted in 5 to 28 sentences per paper and a total of 1183 sentencesimplicitly rhetorical classification of the extracted sentences was already given as each of these sentences already had a rhetorical status assigned to ithowever the rhetorical scheme we used for this task is slightly differentwe excluded textual as this category was designed for 
document uses other than summarizationif a selected sentence had the rhetorical class textual it was reclassified into one of the other six categoriesfigure 8 shows the resulting category distribution among these 1183 sentences which is far more evenly distributed than the one covering all sentences contrast and own are the two most frequent categorieswe did not verify the relevance annotation with human experimentswe accept that the set of sentences chosen by the human annotator is only one possible gold standardwhat is more important is that humans can agree on the rhetorical status of the relevant sentencesliddy observed that agreement on rhetorical status was easier for professional abstractors than sentence selection although they did not necessarily agree on which individual sentences should go into an abstract they did agree on the rhetorical information types that make up a good abstractwe asked our trained annotators to classify a set of 200 sentences randomly sampled from the 1183 sentences selected by the first author into the six rhetorical categoriesthe sentences were presented in order of occurrence in the document but without any context in terms of surrounding sentenceswe measured stability at k 9 86 83 and reproducibility at k 84 these results are reassuring they show that the rhetorical status for important sentences can be particularly well determined better than rhetorical status for all sentences in the document distribution of rhetorical categories we now describe an automatic system that can perform extraction and classification of rhetorical status on unseen text we decided to use machine learning to perform this extraction and classification based on a variety of sentential features similar to the ones reported in the sentence extraction literaturehuman annotation is used as training material such that the associations between these sentential features and the target sentences can be learnedit is also used as gold standard for intrinsic system evaluationa simpler machine learning approach using only word frequency information and no other features as typically used in tasks like text classification could have been employed to test if such a simple approach would be enough we performed a text categorization experiment using the rainbow implementation of a naive bayes term frequency times inverse document frequency method and considering each sentence as a document the result was a classification performance of k 30 the classifier nearly always chooses own and other segmentsthe rare but important categories aim background contrast and basis could be retrieved only with low precision and recalltherefore text classification methods do not provide a solution to our problemthis is not surprising given that the definition of our task has little to do with the distribution of contentbearing words and phrases much less so than the related task of topic segmentation or saggion and lapalmes approach to the summarization of scientific articles which relies on scientific concepts and their relationsinstead we predict that other indicators apart from the simple words contained in the sentence could provide strong evidence for the modeling of rhetorical statusalso the relatively small amount of training material we have at our disposal requires a machine learning method that makes optimal use of as many different kinds of features as possiblewe predicted that this would increase precision and recall on the categories in which we are interestedthe text classification experiment is 
still useful, as it provides a nontrivial baseline for comparison with our intrinsic system evaluation presented in Section 5.

We use a naive Bayesian model as in Kupiec, Pedersen, and Chen's experiment. Sentential features are collected for each sentence, and learning is supervised: in the training phase, associations between these features and human-provided target categories are learned. The target categories are the seven categories in the rhetorical annotation experiment and relevant/nonrelevant in the relevance selection experiment. In the testing phase, the trained model provides the probability of each target category for each sentence of unseen text, on the basis of the sentential features identified for the sentence. Some of the features in our feature pool are unique to our approach, for instance the metadiscourse features; others are borrowed from the text extraction literature or related tasks and adapted to the problem of determining rhetorical status.

Absolute location of a sentence. In the news domain, sentence location is the single most important feature for sentence selection. In our domain, location information, although less dominant, can still give a useful indication: rhetorical zones appear in typical positions in the article, as scientific argumentation follows certain patterns. For example, limitations of the authors' own method can be expected to be found toward the end of the article, whereas limitations of other researchers' work are often discussed in the introduction. We observed that the size of rhetorical zones depends on location, with smaller rhetorical zones occurring toward the beginning and the end of the article. We model this by assigning location values in the following fashion: the article is divided into 20 equal parts, counting sentences. Sentences occurring in parts 1, 2, 3, 4, 19, and 20 receive the values A, B, C, D, I, and J, respectively. Parts 5 and 6 are pooled, and sentences occurring in them are given the value E; the same procedure is applied to parts 15 and 16 and to parts 17 and 18. The remaining sentences in the middle all receive the value F. (A short code sketch of this assignment is given below.)

Section structure. Sections can have an internal structuring; for instance, sentences toward the beginning of a section often have a summarizing function. The section location feature divides each section into three parts and assigns seven values: first sentence, last sentence, second or third sentence, second-last or third-last sentence, or else somewhere in the first, second, or last third of the section.

Paragraph structure. In many genres, paragraphs also have internal structure, with high-level or summarizing sentences occurring more often at the periphery of paragraphs. In this feature, sentences are distinguished into those leading or ending a paragraph and all others.

Headlines. Prototypical headlines can be an important predictor of the rhetorical status of sentences occurring in the given section; however, not all texts in our collection use such headlines. Whenever a prototypical headline is recognized, it is classified into one of the following 15 classes: Introduction, Implementation, Example, Conclusion, Result, Evaluation, Solution, Experiment, Discussion, Method, Problems, Related Work, Data, Further Work, Problem Statement. If none of the patterns match, the value Non-Prototypical is assigned.

Sentence length. Kupiec, Pedersen, and Chen report sentence length as a useful feature for text extraction. In our implementation, sentences are divided into long or short sentences by comparison to a fixed threshold.
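To make the absolute-location feature described above concrete, here is a minimal sketch of the value assignment, written under one assumption the text leaves implicit: that the pooled parts 15 and 16, and 17 and 18, receive the two remaining letters G and H. The function name and indexing convention are ours, not the article's.

```python
def location_value(sentence_index, n_sentences):
    """Map a sentence (0-based index) to one of the location values A-J.

    The article is split into 20 equal parts by sentence count.
    Parts 1-4 -> A-D, parts 5-6 -> E, parts 15-16 -> G (assumed),
    parts 17-18 -> H (assumed), parts 19 and 20 -> I and J,
    and everything in the middle -> F.
    """
    part = min(20, (sentence_index * 20) // n_sentences + 1)  # part number 1..20
    if part <= 4:
        return "ABCD"[part - 1]
    if part in (5, 6):
        return "E"
    if part in (15, 16):
        return "G"   # assumption: letter not spelled out in the text
    if part in (17, 18):
        return "H"   # assumption: letter not spelled out in the text
    if part == 19:
        return "I"
    if part == 20:
        return "J"
    return "F"       # the remaining middle parts

# Example: the 3rd sentence (index 2) of a 100-sentence article falls in part 1.
print(location_value(2, 100))   # -> 'A'
```

Pooling the outer parts more finely than the middle reflects the observation above that rhetorical zones are smaller near the beginning and end of an article.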
Title word contents. Sentences containing many content-bearing words have been hypothesized to be good candidates for text extraction. Baxendale extracted all words, except those on the stop list, from the title and the headlines and determined for each sentence whether or not it contained these words. We received better results by excluding headline words and using only title words.

Tf*idf word contents. How content-bearing a word is can also be measured with frequency counts: the tf*idf formula assigns high values to words that occur frequently in one document but rarely in the overall collection of documents. We use the 18 highest-scoring tf*idf words and classify sentences into those that contain one or more of these words and those that do not.

Verb syntax. Linguistic features like tense and voice often correlate with rhetorical zones; Biber and Riley show correlation of tense and voice with prototypical section structure. In addition, the presence or absence of a modal auxiliary might be relevant for detecting the phenomenon of hedging. For each sentence, we use part-of-speech-based heuristics to determine tense, voice, and presence of modal auxiliaries. This algorithm is shared with the metadiscourse features, and the details are described below.

Citation. There are many connections between citation behavior and relevance or rhetorical status. First, if a sentence contains a formal citation or the name of another author mentioned in the bibliography, it is far more likely to talk about other work than about own work. Second, if it contains a self-citation, it is far more likely to contain a direct statement of continuation than a criticism. Third, the importance of a citation has been related to the distinction between authorial and parenthetical citations. Citations are called authorial if they form a syntactically integral part of the sentence, and parenthetical if they do not; in most cases, authorial citations are used as the subject of a sentence, and parenthetical ones appear toward the middle or the end of the sentence. We built a recognizer for formal citations: it parses the reference list at the end of the article, determines whether a citation is a self-citation, and also finds occurrences of authors' names in running text but outside of formal citation contexts. The citation feature reports whether a sentence contains an author name, a citation, or nothing. If it contains a citation, the value records whether it is a self-citation and also records the location of the citation in the sentence; this last distinction is a heuristic for the authorial/parenthetical distinction. We also experimented with including the number of different citations in a sentence, but this did not improve results.

History. As there are typical patterns in the rhetorical zones, we wanted to include the category assigned to the previous sentence as one of the features. In unseen text, however, the previous target is unknown; at training time it can, however, be calculated as a second-pass process during training. In order to avoid a full Viterbi search of all possibilities, we perform a beam search with a width of three among the candidates of the previous sentence, following Barzilay et al.

Formulaic expressions. We now turn to the last three features in our feature pool, the metadiscourse features, which are more sophisticated than the other features. The first metadiscourse feature models formulaic expressions like the ones described by Swales, as they are semantic indicators that we expect to be helpful for rhetorical classification.
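The tf*idf feature can be sketched as follows. The exact weighting variant, tokenization, and stop-listing used in the original system are not specified in the text, so this uses a standard smoothed tf*idf formulation, and the helper names are invented.

```python
import math
from collections import Counter

def top_tfidf_words(doc_tokens, collection, k=18):
    """Return the k highest-scoring tf*idf words of one document.

    `doc_tokens` is the tokenized document; `collection` is a list of
    tokenized documents used to estimate document frequencies.
    Smoothed tf*idf: tf(w) * log((1 + N) / (1 + df(w))).
    """
    n_docs = len(collection)
    df = Counter()
    for doc in collection:
        df.update(set(doc))
    tf = Counter(doc_tokens)
    scores = {w: tf[w] * math.log((1 + n_docs) / (1 + df[w])) for w in tf}
    return set(sorted(scores, key=scores.get, reverse=True)[:k])

def tfidf_feature(sentence_tokens, top_words):
    """Binary feature: does the sentence contain any of the top tf*idf words?"""
    return any(tok in top_words for tok in sentence_tokens)

# Toy usage: score one document against a tiny two-document collection.
docs = [["we", "present", "a", "statistical", "parser"],
        ["the", "parser", "uses", "a", "hand", "written", "grammar"]]
top = top_tfidf_words(docs[0], docs, k=3)
print(tfidf_feature(["we", "present", "results"], top))
```

In the system described in this article, the top-scoring words would be computed once per document and each sentence tested against them, yielding a single binary value per sentence.

We use a list of phrases described by regular expressions similar to Paice's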
grammarour list is divided into 18 semantic classes comprising a total of 644 patternsthe fact that phrases are clustered is a simple way of dealing with data sparsenessin fact our experiments in section 512 will show the usefulness of our semantic clusters the clustered list performs much better than the unclustered list agentagents and actions are more challenging to recognizewe use a mechanism that dependent on the voice of a sentence recognizes agents and their predicates classification of agents and actions relies on a manually created lexicon of manual classesas in the formulaic feature similar agents and actions are generalized and clustered together to avoid data sparsenessthe lexicon for agent patterns contains 13 types of agents and a total of 167 patternsthese 167 patterns expand to many more strings as we use a replace mechanism the main three agent types we distinguish are us agent them agent and general agent following the types of intellectual attribution discussed abovea fourth type is us previous agent additional agent types include nonpersonal agents like aims problems solutions absence of solution or textual segmentsthere are four equivalence classes of agent classes were created based on intuition but subsequently each class was tested with corpus statistics to determine whether it should be removed or notwe wanted to find and exclude classes that had a distribution very similar to the overall distribution of the target categories as such features are not distinctivewe measured associations using the loglikelihood measure for each combination of target category and semantic class by converting each cell of the contingency into a 22 contingency tablewe kept only classes of verbs in which at least one category showed a high association as that means that in these cases the distribution was significantly different from the overall distributionthe last column in table 6 shows that the classes them pronoun general solution problem and ref were removed removal improved the performance of the agent featuresegagentsegagent is a variant of the agent feature that keeps track of previously recognized agents unmarked sentences receive these previous agents as a value actionwe use a manually created action lexicon containing 365 verbs the verbs are clustered into 20 classes based on semantic concepts such as similarity contrast competition presentation argumentation and textual structurefor example presentation actions include communication verbs like present report and state research actions include analyze conduct define and observe and argumentation actions include argue disagree and object todomainspecific actions are contained in the classes indicating a problem and solutioncontributing actions the action lexiconaffect we hope to improve our results 9 x argumentation we argue against a model of 19 x awareness we are not aware of attempts 5 teufel and moens summarizing scientific articles recognition of negation is essential the semantics of not solving is closer to being problematic than it is to solvingthe following classes were removed by the gscore test described above because their distribution was too similar to the overall distribution future interest need argumentation affect in both negative and positive contexts and awareness only in positive context the following classes had too few occurrences in negative context and thus the negative context of the class was also removed better solution contrast presentation problem again the removal improved the performance of the 
action featurethe algorithm for determining agents and actions relies on finitestate patterns over partofspeech tagsstarting from each finite verb the algorithm collects chains of auxiliaries belonging to the associated finite clause and thus determines the clauses tense and voiceother finite verbs and commas are assumed to be clause boundariesonce the semantic verb is found its stem is looked up in the action lexiconnegation is determined if one of 32 fixed negation words is present in a sixword window to the right of the finite verbas our classifier requires one unique value for each classified item for each feature we had to choose one value for sentences containing more than one finite clausewe return the following values for the action and agents feature the first agentaction pair if both are nonzero otherwise the first agent without an action otherwise the first action without an agent if availablein order to determine the level of correctness of agent and action recognition we had first to evaluate manually the error level of the pos tagging of finite verbs as our algorithm crucially relies on finite verbsin a random sample of 100 sentences from our development corpus that contain any finite verbs at all the tagger showed a recall of 95 and a precision of 93we found that for the 174 correctly determined finite verbs the heuristics for negation and presence of modal auxiliaries worked without any errors the correct semantic verb was determined with 96 accuracy most errors were due to misrecognition of clause boundariesaction type lookup was fully correct even in the case of phrasal verbs and longer idiomatic expressions there were seven voice errors two of which were due to postagging errors the remaining five voice errors correspond to 98 accuracycorrectness of agent type determination was tested on a random sample of 100 sentences containing at least one agent resulting in 111 agentsno agent pattern that should have been identified was missed of the 111 agents 105 cases were correct therefore we consider the two features to be adequately robust to serve as sentential features in our systemhaving detailed the features and classifiers of the machine learning system we use we will now turn to an intrinsic evaluation of its performanceour task is to perform content selection from scientific articles which we do by classifying sentences into seven rhetorical categoriesthe summaries based on this classification use some of these sentences directly namely sentences that express the contribution of a particular article sentences expressing contrasts with other work and sentences stating imported solutions from other work other more frequent rhetorical categories namely other own and background might also be extracted into the summarybecause the task is a mixture of extraction and classification we report system performance as follows we first report precision and recall values for all categories in comparison to human performance and the text categorization baseline as we are primarily interested in good performance on the categories aim contrast basis and backgroundthe results of stochastic classification were compiled with a 10fold crossvalidation on our 80paper corpusas we do not have much annotated material crossvalidation is a practical way to test as it can make use of the full development corpus for training without ever using the same data for training and testing substantial improvement over the baseline in terms of precision and recall of the important categories aim background 
Contrast, and Basis. We use the F-measure, defined by van Rijsbergen as F = 2PR / (P + R), as a convenient way of reporting precision and recall in one value. F-measures for our categories range from .61 and .52 down to .45, .38, and .26. The recall for some categories is relatively low; as our gold standard is designed to contain a lot of redundant information for the same category, this is not too worrying. Low precision in some categories, however, could potentially present a problem for later steps in the document summarization process. Overall, we find these results encouraging, particularly in view of the subjective nature of the task and the high compression achieved. No direct comparison with Kupiec, Pedersen, and Chen's results is possible, as different data sets are used and as Kupiec et al.'s relevant sentences do not directly map into one of our categories. Assuming, however, that their relevant sentences are probably most comparable to our Aim sentences, our precision and recall of 44% and 65% compare favorably to theirs.

Table 9 shows a confusion matrix between one annotator and the system. The system is likely to confuse Aim and Own sentences; it also shows a tendency to confuse Other and Own sentences. The system also fails to distinguish categories involving other people's work. Overall, these tendencies mirror human errors, as can be seen from a comparison with Table 2.

Table 10 shows the results in terms of three overall measures: kappa, percentage accuracy, and macro-F. Macro-F is the mean of the F-measures of all seven categories. One reason for using macro-F and kappa is that we want to measure success particularly on the rare categories that are needed for our final task. Microaveraging techniques like traditional accuracy tend to overestimate the contribution of frequent categories in skewed distributions like ours; this is undesirable, as Own is the least interesting category for our purposes. This situation has parallels in information retrieval, where precision and recall are used because accuracy overestimates the performance on irrelevant items. In the case of macro-F, each category is treated as one unit, independent of the number of items contained in it; therefore, the classification success of the individual items in rare categories is given more importance than the classification success of frequent-category items. When looking at the numerical values, however, one should keep in mind that macroaveraging results are in general numerically lower; this is because there are fewer training cases for the rare categories, which therefore perform worse with most classifiers. In the case of kappa, classifications that incorrectly favor frequent categories are punished because of a high random agreement.

This effect can be shown most easily when the baselines are considered. The most ambitious baseline we use is the output of a text categorization system, as described in Section 4. Other possible baselines, which are all easier to beat, include classification by the most frequent category. This baseline turns out to be trivial, as it does not extract any of the rare rhetorical categories in which we are particularly interested, and it therefore receives a low kappa value at K = .12. Possible chance baselines include random annotation with uniform distribution and random annotation with observed distribution; the latter baseline is built into the definition of kappa. Although our system outperforms an ambitious baseline and also performs much above chance, there is still a big gap in performance between humans and machine: macro-F shows a 20% difference between our system and human performance.
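As a small illustration of the overall measures just discussed, the following sketch computes the per-category F-measure, 2PR / (P + R), and macro-F from gold and predicted labels. The label abbreviations and the toy data are invented.

```python
from collections import Counter

def macro_f(gold, predicted):
    """Macro-F: the unweighted mean of per-category F-measures."""
    categories = set(gold) | set(predicted)
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, predicted):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1   # predicted p, but gold was something else
            fn[g] += 1   # gold g was missed
    f_scores = {}
    for c in categories:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f_scores[c] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return sum(f_scores.values()) / len(f_scores), f_scores

# Toy example: the frequent category OWN dominates, the rare category AIM is missed once.
gold = ["OWN", "OWN", "OWN", "AIM", "OWN", "AIM"]
pred = ["OWN", "OWN", "OWN", "OWN", "OWN", "AIM"]
print(macro_f(gold, pred))
```

On this toy input, accuracy is 5/6 (about .83) while macro-F is about .78, which illustrates the point above that microaveraged measures flatter the frequent category.

If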
the system is put into a pool of annotators for the 25 articles for which threeway human judgment exists agreement drops from k 71 to k 59this is a clear indication that the systems annotation is still distinguishably different from human annotation the optimal feature combination the most distinctive single feature is location followed by segagent citations headlines agent and formulaic in each case the unclustered versions of agent segagent and formulaic performed much worse than the clustered versions they did not improve final results when added into the feature poolaction performs slightly better at k 11 than the baseline by most frequent category but far worse than random by observed distributionthe following features on their own classify each sentence as own relative location paragraphs tfidf title sentence length modality tense and voicehistory performs very badly on its own at k 51 it classifies almost all sentences as backgroundit does this because the probability of the first sentences being a background sentence is almost one and if no other information is available it is very likely that another background sentence will follow after a background sentenceeach of these features however still contributes to the final result if any of them is taken out of the feature pool classification performance decreaseshow can this be given that the individual features perform worse than chanceas the classifier derives the posterior probability by multiplying evidence from each feature even slight evidence coming from one feature can direct the decision in the right directiona feature that contributes little evidence on its own can thus in combination with others still help in disambiguatingfor the naive bayesian classification method indeed it is most important that the features be as independent of each other as possiblethis property cannot be assessed by looking at the features isolated performance but only in combination with othersit is also interesting to see that certain categories are disambiguated particularly well by certain features the formulaic feature which is by no means the strongest feature is nevertheless the most diverse as it contributes to the disambiguation of six categories directlythis is because many different rhetorical categories have typical cue phrases associated with them not surprisingly location and history are the features particularly useful for detecting background sentences and segagent additionally contributes toward the determination of background zones the agent and action features also prove their worth as they manage to disambiguate categories that many of the other features alone cannot disambiguate of how the figures reported in the previous section translate into real output we present in figure 12 the output of the system when run on the example paper the second column shows whether the human judge agrees with the systems decision ten out of the 15 extracted sentences have been classified correctlythe example also shows that the determination of rhetorical status is not always straightforwardfor example whereas the first aim sentence that the system proposes is clearly wrong all other incorrect aim sentences carry important insystem output for example paper formation about research goals of the paper sentence 41 states the goal in explicit terms but it also contains a contrastive statement which the annotator decided to rate higher than the goal statementboth sentences 12 and 150 give highlevel descriptions of the work that might pass as a goal 
statementsimilarly in sentence 21 the agent and action features detected that the first part of the sentence has something to do with comparing methods and the system then decided to classify the sentence as contrastall in all we feel that the extracted material conveys the rhetorical status adequatelyan extrinsic evaluation additionally showed that the end result provides considerable added value when compared to sentence extracts the classifier for rhetorical status that we evaluated in the previous section is an important first step in our implementation the next step is the determination of relevant sentences in the textone simple solution for relevance decision would be to use all aim basis and contrast sentences as these categories are rare overallthe classifier we use has the nice property of roughly keeping the distribution of target categories so that we end up with a sensible number of these sentencesthe strategy of using all aim contrast and basis sentences can be evaluated in a similar vein to the previous experimentin terms of relevance the asterisk in figure 12 marks sentences that the human judge found particularly relevant in the overall context six out of all 15 sentences and 6 out of the 10 sentences that received the correct rhetorical status were judged relevant in the exampletable 12 reports the figure for the entire corpus by comparing the systems output of correctly classified rhetorical categories to human judgmentin all cases the results are far above the nontrivial baselineon aim contrast and basis sentences our system achieves very high precision values of 96 70 and 71recall is lower at 70 24 and 39 but low recall is less of a problem in our final tasktherefore the main bottleneck is correct rhetorical classificationonce that is accomplished the selected categories show high agreement with human judgment and should therefore represent good material for further processing stepsif however one is also interested in selecting background sentences as we are simply choosing all background sentences would result in low precision of 16 which does not seem to be the optimal solutionwe therefore use a second classifier for finding the most relevant sentences independently that was trained on the relevance gold standardour best classifier operates at a precision of 465 and recall of 452 the second classifier raises the precision for background sentences from 16 to 38 while keeping recall high at 88this example shows that the right procedure for relevance determination changes from category to category and also depends on the final task one is trying to accomplishwe have presented a new method for content selection from scientific articlesthe analysis is genrespecific it is based on rhetorical phenomena specific to academic writing such as problemsolution structure explicit intellectual attribution and statements of relatedness to other workthe goal of the analysis is to identify the contribution of an article in relation to background material and to other specific current workour methodology is situated between text extraction methods and fact extraction methods although our analysis has the advantage of being more contextsensitive than text extraction methods it retains the robustness of this approach toward different subdomains presentational traditions and writing styleslike fact extraction methods our method also uses a template whose slots are being filled during analysisthe slots of our template are defined as rhetorical categories rather than by domainspecific 
categories this makes it possible for our approach to deal with texts of different domains and unexpected topicssparck jones argues that it is crucial for a summarization strategy to relate the largescale document structure of texts to readers tasks in the real world we feel that incorporating a robust analysis of discourse structure into a document summarizer is one step along this wayour practical contributions are twofoldfirst we present a scheme for the annotation of sentences with rhetorical status and we have shown that the annotation is stable and reproducible since these results indicate that the annotation is reliable we use it as our gold standard for evaluation and trainingsecond we present a machine learning system for the classification of sentences by relevance and by rhetorical statusthe contribution here is not the statistical classifier which is wellknown and has been used in a similar task by kupiec pedersen and oren but instead the features we usewe have adapted 13 sentential features in such a way that they work robustly for our task we also present three new features that detect scientific metadiscourse in a novel waythe results of an intrinsic system evaluation show that the system can identify sentences expressing the specific goal of a paper with 57 precision and 79 recall sentences expressing criticism or contrast with 57 precision and 42 recall and sentences expressing a continuation relationship to other work with 62 precision and 43 recallthis substantially improves a baseline of text classification which uses only a tfidf model over wordsthe agreement of correctly identified rhetorical roles with human relevance judgments is even higher we see these results as an indication that shallow discourse processing with a welldesigned set of surfacebased indicators is possiblethe metadiscourse features one focus of our work currently depend on manual resourcesthe experiments reported here explore whether metadiscourse information is useful for the automatic determination of rhetorical status and this is clearly the casethe next step however should be the automatic creation of such resourcesfor the task of dialogue act disambiguation samuel carberry and vijayshanker suggest a method of automatically finding cue phrases for disambiguationit may be possible to apply this or a similar method to our data and to compare the performance of automatically gained resources with manual onesfurther work can be done on the semantic verb clusters described in section 42klavans and kan who use verb clusters for document classification according to genre observe that verb information is rarely used in current practical natural language applicationsmost tasks such as information extraction and document classification identify and use nominal constructs instead the verb clusters we employ were created using our intuition of which type of verb similarity would be useful in the genre and for the taskthere are good reasons for using such a handcrafted genrespecific verb lexicon instead of a general resource such as wordnet or levins classes many verbs used in the domain of scientific argumentation have assumed a specialized meaning which our lexicon readily encodesklavans and kans classes which are based on levins classes are also manually createdresnik and diab present yet other measures of verb similarity which could be used to arrive at a more datadriven definition of verb classeswe are currently comparing our verb clusterings to klavans and kans and to bottomup clusters of verb 
similarities generated from our annotated datathe recognition of agents which is already the secondbest feature in the pool could be further improved by including named entity recognition and anaphora resolutionnamed entity recognition would help in cases like the following lhip provides a processing method which allows selected portions of the input to be ignored or handled differently where lhip is the name of the authors approach and should thus be tagged as us agent to do so however one would need to recognize it as a named approach which is associated with the authorsit is very likely that such a treatment which would have to include information from elsewhere in the text would improve results particularly as named approaches are frequent in the computational linguistics domaininformation about named approaches in themselves would also be an important aspect to include in summaries or citation indexesanaphora resolution helps in cases in which the agent is syntactically ambiguous between own and other approaches to test whether and how much performance would improve we manually simulated anaphora resolution on the 632 occurrences of ref agent in the development corpusof the 632 ref agents 436 were classified as us agent 175 as them agent and 20 as general agentas a result of this manual disambiguation the performance of the agent feature increased dramatically from k 08 to k 14 and that of segagent from k 19 to k 22this is a clear indication of the potential added value of anaphora resolution for our taskas far as the statistical classification is concerned our results are still far from perfectobvious ways of improving performance are the use of a more sophisticated statistical classifier and more training materialwe have experimented with a maximum entropy model repeated incremental pruning to produce error reduction and decision trees preliminary results do not show significant improvement over the naive bayesian modelone problem is that 4 of the sentences in our current annotated material are ambiguous they receive the same feature representation but are classified differently by the annotatorsa possible solution is to find better and more distinctive features we believe that robust higherlevel features like actions and agents are a step in the right directionwe also suspect that a big improvement could be achieved with smaller annotation unitsmany errors come from instances in which one half of a sentence serves one rhetorical purpose the other another as in the following example the current paper shows how to implement this general notion without following krifkas analysis in detail here the first part describes the papers research goal whereas the second expresses a contrastcurrently one target category needs to be associated with the whole sentence as an undesired side effect the contrastlike textual parts are wrongly associated with the aim target categoryif we allowed for a smaller annotation unit this systematic noise in the training data could be removedanother improvement in classification accuracy might be achieved by performing the classification in a cascading waythe system could first perform a classification into ownlike classes otherlike categories and background similar to the way human annotation proceedssubclassification among these classes would then lead to the final sevenway classificationthe work reported in this article was conducted while both authors were in the hcrc language technology group at the university of edinburghthe authors would like to thank 
jean carletta for her help with the experimental design chris brew for many helpful discussions claire grover and andrei mikheev for advice on the xml implementation and the annotators vasilis karaiskos and anne wilson for their meticulous work and criticism which led to several improvements in the annotation schemethanks also to byron georgantopolous who helped to collect the first version of the corpus and to the four anonymous reviewers
J02-4002
summarizing scientific articles: experiments with relevance and rhetorical status. in this article we propose a strategy for the summarization of scientific articles that concentrates on the rhetorical status of statements in an article: material for summaries is selected in such a way that summaries can highlight the new contribution of the source article and situate it with respect to earlier work. we provide a gold standard for summaries of this kind, consisting of a substantial corpus of conference articles in computational linguistics annotated with human judgments of the rhetorical status and relevance of each sentence in the articles. we present several experiments measuring our judges' agreement on these annotations. we also present an algorithm that, on the basis of the annotated training material, selects content from unseen articles and classifies it into a fixed set of seven rhetorical categories. the output of this extraction and classification system can be viewed as a single-document summary in its own right; alternatively, it provides starting material for the generation of task-oriented and user-tailored summaries designed to give users an overview of a scientific field. we examine the problem of summarizing scientific articles using rhetorical analysis of sentences. we summarize scientific articles by selecting rhetorical elements that are commonly present in scientific abstracts.
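the experiments above report inter-annotator agreement with the kappa statistic. the exact multi-annotator variant used in the article is not reproduced here, so the python sketch below only illustrates the basic two-annotator computation of kappa over per-sentence category labels; the toy labels are invented for the example.

# a minimal sketch of cohen's kappa between two annotators (illustration only)
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    assert labels_a and len(labels_a) == len(labels_b)
    n = len(labels_a)
    # observed agreement
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # chance agreement from each annotator's marginal label distribution
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(labels_a) | set(labels_b))
    return (p_o - p_e) / (1 - p_e) if p_e < 1 else 1.0

# toy example with three of the seven rhetorical categories
ann1 = ["own", "own", "aim", "background", "own", "aim"]
ann2 = ["own", "aim", "aim", "background", "own", "own"]
print(round(cohen_kappa(ann1, ann2), 3))   # about 0.455 for this toy data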
a systematic comparison of various statistical alignment models we present and compare various methods for computing word alignments using statistical or heuristic models we consider the five alignment models presented in brown della pietra della pietra and mercer the hidden markov alignment model smoothing techniques and refinements these statistical models are compared with two heuristic models based on the dice coefficient we present different methodsfor combining word alignments to perform a symmetrization of directed statistical alignment models as evaluation criterion we use the quality of the resulting viterbi alignment compared to a manually produced reference alignment we evaluate the models on the germanenglish verbmobil task and the frenchenglish hansards task we perform a detailed analysis of various design decisions of our statistical alignment system and evaluate these on training corpora of various sizes an important result is that refined alignment models with a firstorder dependence and a fertility model yield significantly better results than simple heuristic models in the appendix we present an efficient training algorithm for the alignment models presented we present and compare various methods for computing word alignments using statistical or heuristic modelswe consider the five alignment models presented in brown della pietra della pietra and mercer the hidden markov alignment model smoothing techniques and refinementsthese statistical models are compared with two heuristic models based on the dice coefficientwe present different methodsfor combining word alignments to perform a symmetrization of directed statistical alignment modelsas evaluation criterion we use the quality of the resulting viterbi alignment compared to a manually produced reference alignmentwe evaluate the models on the germanenglish verbmobil task and the frenchenglish hansards taskwe perform a detailed analysis of various design decisions of our statistical alignment system and evaluate these on training corpora of various sizesan important result is that refined alignment models with a firstorder dependence and a fertility model yield significantly better results than simple heuristic modelsin the appendix we present an efficient training algorithm for the alignment models presentedwe address in this article the problem of finding the word alignment of a bilingual sentencealigned corpus by using languageindependent statistical methodsthere is a vast literature on this topic and many different systems have been suggested to solve this problemour work follows and extends the methods introduced by brown della pietra della pietra and mercer by using refined statistical models for the translation processthe basic idea of this approach is to develop a model of the translation process with the word alignment as a hidden variable of this process to apply statistical estimation theory to compute the optimal model parameters and to perform alignment search to compute the best word alignmentso far refined statistical alignment models have in general been rarely usedone reason for this is the high complexity of these models which makes them difficult to understand implement and tuneinstead heuristic models are usually usedin heuristic models the word alignments are computed by analyzing some association score metric of a link between a source language word and a target language wordthese models are relatively easy to implementin this article we focus on consistent statistical alignment models suggested in 
the literature but we also describe a heuristic association metricby providing a detailed description and a systematic evaluation of these alignment models we give the reader various criteria for deciding which model to use for a given taskexample of a word alignment we propose to measure the quality of an alignment model by comparing the quality of the most probable alignment the viterbi alignment with a manually produced reference alignmentthis has the advantage of enabling an automatic evaluation to be performedin addition we shall show that this quality measure is a precise and reliable evaluation criterion that is well suited to guide designing and training statistical alignment modelsthe software used to train the statistical alignment models described in this article is publicly available we follow brown della pietra della pietra and mercer to define alignment as an object for indicating the corresponding words in a parallel textfigure 1 shows an examplevery often it is difficult for a human to judge which words in a given target string correspond to which words in its source stringespecially problematic is the alignment of words within idiomatic expressions free translations and missing function wordsthe problem is that the notion of correspondence between words is subjectiveit is important to keep this in mind in the evaluation of word alignment qualitywe shall deal with this problem in section 5the alignment between two word strings can be quite complicatedoften an alignment includes effects such as reorderings omissions insertions and wordtophrase alignmentstherefore we need a very general representation of alignmentformally we use the following definition for alignment in this articlewe are given a source stringf1j f1 fj fj and a target language string ei1 e1 ei ei that have to be alignedwe define an alignment between the two word strings as a subset of the cartesian product of the word positions that is an modeling the alignment as an arbitrary relation between source and target language positions is quite generalthe development of alignment models that are able to deal with this general representation however is hardtypically the alignment models presented in the literature impose additional constraints on the alignment representationtypically the alignment representation is restricted in a way such that each source word is assigned to exactly one target wordalignment models restricted in this way are similar to the concept of hidden markov models in speech recognitionthe alignment mapping in such models consists of associations j i aj from source position j to target position i ajthe alignment aj1 a1 aj aj may contain alignments aj 0 with the empty word e0 to account for source words that are not aligned with any target wordconstructed in such a way the alignment is not a relation between source and target language positions but only a mapping from source to target language positionsin melamed a further simplification is performed that enforces a onetoone alignment for nonempty wordsthis means that the alignment mapping aj1 must be injective for all word positions aj 0note that many translation phenomena cannot be handled using restricted alignment representations such as this oneespecially methods such as melameds are in principle not able to achieve a 100 recallthe problem can be reduced through corpus preprocessing steps that perform grouping and splitting of wordssome papers report improvements in the alignment quality of statistical methods when linguistic knowledge is 
used in these methods the linguistic knowledge is used mainly to filter out incorrect alignmentsin this work we shall avoid making explicit assumptions concerning the language usedby avoiding these assumptions we expect our approach to be applicable to almost every language pairthe only assumptions we make are that the parallel text is segmented into aligned sentences and that the sentences are segmented into wordsobviously there are additional implicit assumptions in the models that are needed to obtain a good alignment qualityfor example in languages with a very rich morphology such as finnish a trivial segmentation produces a high number of words that occur only once and every learning method suffers from a significant data sparseness problemthere are numerous applications for word alignments in natural language processingthese applications crucially depend on the quality of the word alignment an obvious application for word alignment methods is the automatic extraction of bilingual lexica and terminology from corpora statistical alignment models are often the basis of singlewordbased statistical machine translation systems in addition these models are the starting point for refined phrasebased statistical or examplebased translation systems in such systems the quality of the machine translation output directly depends on the quality of the initial word alignment another application of word alignments is in the field of word sense disambiguation in yarowsky ngai and wicentowski word alignment is used to transfer text analysis tools such as morphologic analyzers or partofspeech taggers from a language such as english for which many tools already exist to languages for which such resources are scarcein section 2 we review various statistical alignment models and heuristic modelswe present a new statistical alignment model a loglinear combination of the best models of vogel ney and tillmann and brown della pietra della pietra and mercer in section 3 we describe the training of the alignment models and present a new training schedule that yields significantly better resultsin addition we describe how to deal with overfitting deficient models and very small or very large training corporain section 4 we present some heuristic methods for improving alignment quality by performing a symmetrization of word alignmentsin section 5 we describe an evaluation methodology for word alignment methods dealing with the ambiguities associated with the word alignment annotation based on generalized precision and recall measuresin section 6 we present a systematic comparison of the various statistical alignment models with regard to alignment quality and translation qualitywe assess the effect of training corpora of various sizes and the use of a conventional bilingual dictionaryin the literature it is often claimed that the refined alignment models of brown della pietra della pietra and mercer are not suitable for small corpora because of data sparseness problemswe show that this is not the case if these models are parametrized suitablyin the appendix we describe some methods for efficient training of fertilitybased alignment modelswe distinguish between two general approaches to computing word alignments statistical alignment models and heuristic modelsin the following we describe both types of models and compare them from a theoretical viewpointthe notational convention we employ is as followswe use the symbol pr to denote general probability distributions with no specific assumptionsin contrast for 
modelbased probability distributions we use the generic symbol p211 statistical alignment modelsin statistical machine translation we try to model the translation probability pr which describes the relationship between a source language string fj1 and a target language string ei1in alignment models pr a hidden alignment aj1 is introduced that describes a mapping from a source position j to a target position ajthe relationship between the translation model and the alignment model is given by the alignment aj 1 may contain alignments aj 0 with the empty word e0 to account for source words that are not aligned with any target wordin general the statistical model depends on a set of unknown parameters θ that is learned from training datato express the dependence of the model on the parameter set we use the following notation the art of statistical modeling is to develop specific statistical models that capture the relevant properties of the considered problem domainin our case the statistical alignment model has to describe the relationship between a source language string and a target language string adequatelyto train the unknown parameters θ we are given a parallel training corpus consisting of s sentence pairs s 1 sfor each sentence pair the alignment variable is denoted by a aj1the unknown parameters θ are determined by maximizing the likelihood on the parallel training corpus typically for the kinds of models we describe here the expectation maximization algorithm or some approximate them algorithm is used to perform this maximizationto avoid a common misunderstanding however note that the use of the them algorithm is not essential for the statistical approach but only a useful tool for solving this parameter estimation problemalthough for a given sentence pair there is a large number of alignments we can always find a best alignment the alignment ˆaj1 is also called the viterbi alignment of the sentence pairlater in the article we evaluate the quality of this viterbi alignment by comparing it to a manually produced reference alignmentthe parameters of the statistical alignment models are optimized with respect to a maximumlikelihood criterion which is not necessarily directly related to alignment qualitysuch an approach however requires training with manually defined alignments which is not done in the research presented in this articleexperimental evidence shows that the statistical alignment models using this parameter estimation technique do indeed obtain a good alignment qualityin this paper we use models 1 through 5 described in brown della pietra della pietra and mercer the hidden markov alignment model described in vogel ney and tillmann and och and ney and a new alignment model which we call model 6all these models use a different decomposition of the probability pr212 heuristic modelsconsiderably simpler methods for obtaining word alignments use a function of the similarity between the types of the two languages frequently variations of the dice coefficient are used as this similarity functionfor each sentence pair a matrix including the association scores between every word at every position is then obtained c denotes the cooccurrence count of e and f in the parallel training corpusc and c denote the count of e in the target sentences and the count off in the source sentences respectivelyfrom this association score matrix the word alignment is then obtained by applying suitable heuristicsone method is to choose as alignment aj i for position j the word with the largest 
association score a refinement of this method is the competitive linking algorithm in a first step the highestranking word position is alignedthen the corresponding row and column are removed from the association score matrixthis procedure is iteratively repeated until every source or target language word is alignedthe advantage of this approach is that indirect associations occur less oftenthe resulting alignment contains only onetoone alignments and typically has a higher precision than the heuristic model defined in equation tage of the heuristic models is their simplicitythey are very easy to implement and understandtherefore variants of the heuristic models described above are widely used in the word alignment literatureone problem with heuristic models is that the use of a specific similarity function seems to be completely arbitrarythe literature contains a large variety of different scoring functions some including empirically adjusted parametersas we show in section 6 the dice coefficient results in a worse alignment quality than the statistical modelsin our view the approach of using statistical alignment models is more coherentthe general principle for coming up with an association score between words results from statistical estimation theory and the parameters of the models are adjusted such that the likelihood of the models on the training corpus is maximized221 hidden markov alignment modelthe alignment model pr can be structured without loss of generality as follows dependence for the alignments aj and that the lexicon probability depends only on the word at position aj later in the article we describe a refinement with a dependence on eaj1 in the alignment modelputting everything together and assuming a simple length model with the alignment probability p and the translation probability pto make the alignment parameters independent of absolute word positions we assume that the alignment probabilities p depend only on the jump width using a set of nonnegative parameters c we can write the alignment probabilities in the form this form ensures that the alignment probabilities satisfy the normalization constraint for each conditioning word position it it 1 ithis model is also referred to as a homogeneous hmm a similar idea was suggested by dagan church and gale in the original formulation of the hidden markov alignment model there is no empty word that generates source words having no directly aligned target wordwe introduce the empty word by extending the hmm network by i empty words e2i i1the target word ei has a corresponding empty word eii we enforce the following constraints on the transitions in the hmm network involving the empty word e01 the parameter p0 is the probability of a transition to the empty word which has to be optimized on heldout datain our experiments we set p0 02whereas the hmm is based on firstorder dependencies p for the alignment distribution models 1 and 2 use zeroorder dependencies p hence the word order does not affect the alignment probabilityto reduce the number of alignment parameters we ignore the dependence on j in the alignment model and use a distribution p instead of pin the following we give a short description of the fertilitybased alignment models of brown della pietra della pietra and mercer a gentle introduction can be found in knight the fertilitybased alignment models have a significantly more complicated structure than the simple models 1 and 2the fertility oi of a word ei in position i is defined as the number of aligned source 
words the fertilitybased alignment models contain a probability p that the target word e is aligned to o wordsby including this probability it is possible to explicitly describe the fact that for instance the german word ubermorgen produces four english words in particular the fertility o 0 is used for prepositions or articles that have no direct counterpart in the other languageto describe the fertilitybased alignment models in more detail we introduce as an alternative alignment representation the inverted alignments which define a mapping from target to source positions rather than the other way aroundwe allow several positions in the source language to be covered that is we consider alignments b of the form an important constraint for the inverted alignment is that all positions of the source sentence must be covered exactly once that is the bi have to form a partition of the set 1 j jthe number of words oi bi is the fertility of the word eiin the following bik refers to the kth element of bi in ascending orderthe inverted alignments bi0 are a different way to represent normal alignments aj1the set b0 contains the positions of all source words that are aligned with the empty wordfertilitybased alignment models use the following decomposition and assumptions2 as might be seen from this equation we have tacitly assumed that the set b0 of words aligned with the empty word is generated only after the nonempty positions have we obtain an zeroorder alignment model p in model 4 every word is dependent on the previous aligned word and on the word classes of the surrounding wordsfirst we describe the dependence of alignment positionswe have two firstorder alignment models p1 and p1the difference between this model and the firstorder alignment model in the hmm lies in the fact that here we now have a dependence along the jaxis instead of a dependence along the iaxisthe model p1 is used to position the first word of a set bi and the model p1 is used to position the remaining words from left to right the function i i p gives the largest value i 0the symbol bp denotes the average of all elements in bpmodels 3 4 and 5 define the probability p as uniformly distributed for the o0 possibilities given the number of words aligned with the empty word o0 b0assuming a binomial distribution for the number of words aligned with the empty word we obtain the following distribution for b0 the free parameter p1 is associated with the number of words that are aligned with the empty wordthere are o0 ways to order the o0 words produced by the empty word and hence the alignment model of the empty word is nondeficientas we will see in section 32 this creates problems for models 3 and 4therefore we modify models 3 and 4 slightly by replacing 00 in equation with jφ0 as a result of this modification the alignment models for both nonempty words and the empty word are deficient231 model 6as we shall see the alignment models with a firstorder dependence produce significantly better results than the other alignment modelsthe hmm predicts the distance between subsequent source language positions whereas model 4 predicts the distance between subsequent target language positionsthis implies that the hmm makes use of locality in the source language whereas model 4 makes use of locality in the target languagewe expect to achieve better alignment quality by using a model that takes into account both types of dependenciestherefore we combine hmm and model 4 in a loglinear way and call the resulting model model 6 here the 
interpolation parameter α is employed to weigh model 4 relative to the hidden markov alignment modelin our experiments we use model 4 instead of model 5 as it is significantly more efficient in training and obtains better resultsin general we can perform a loglinear combination of several models pk k1kby the interpolation parameters αk are determined in such a way that the alignment quality on heldout data is optimizedwe use a loglinear combination instead of the simpler linear combination because the values of pr typically differ by orders of magnitude for hmm and model 4in such a case we expect the loglinear combination to be better than a linear combination5 it is straightforward to extend the alignment parameters to include a dependence on the word classes of the surrounding words in the hidden markov alignment model we allow for a dependence of the position aj on the class of the preceding target word c psimilarly we can include dependencies on source and target word classes in models 4 and 5 the categorization of the words into classes is performed automatically by using the statistical learning procedure described in kneser and ney 233 overview of modelsthe main differences among the statistical alignment models lie in the alignment model they employ the fertility model they employ and the presence or absence of deficiencyin addition the models differ with regard to the efficiency of the estep in the them algorithm table 1 offers an overview of the properties of the various alignment modelsoverview of the alignment modelsmodel alignment model fertility model estep deficient model 1 uniform no exact no model 2 zeroorder no exact no hmm firstorder no exact no model 3 zeroorder yes approximative yes model 4 firstorder yes approximative yes model 5 firstorder yes approximative no model 6 firstorder yes approximative yes we now develop an algorithm to compute the viterbi alignment for each alignment modelalthough there exist simple polynomial algorithms for the baseline models 1 and 2 we are unaware of any efficient algorithm for computing the viterbi alignment for the fertilitybased alignment modelsfor model 2 we obtain hence the maximization over the j different alignments decomposes into j maximizations of lexicon probabilitiessimilarly the viterbi alignment for model 2 can be computed with a complexity of ofinding the optimal alignment for the hmm is more complicated than for model 1 or model 2using a dynamic programming approach it is possible to obtain the viterbi alignment for the hmm with a complexity of o for the refined alignment models however namely models 3 4 5 and 6 maximization over all alignments cannot be efficiently carried outthe corresponding search problem is npcomplete for short sentences a possible solution could be an a search algorithm in the work presented here we use a more efficient greedy search algorithm for the best alignment as suggested in brown della pietra della pietra and mercer the basic idea is to compute the viterbi alignment of a simple model this alignment is then iteratively improved with respect to the alignment probability of the refined alignment modelin the appendix we present methods for performing an efficient computation of this pseudoviterbi alignmentin this section we describe our approach to determining the model parameters 0every model has a specific set of free parametersfor example the parameters 0 for model 4 consist of lexicon alignment and fertility parameters to train the model parameters 0 we use a maximumlikelihood approach as 
described in equation by applying the them algorithm the different models are trained in succession on the same data the final parameter values of a simpler model serve as the starting point for a more complex modelin the estep of model 1 the lexicon parameter counts for one sentence pair are calculated here n is the training corpus count of the sentence pair in the mstep the lexicon parameters are computed similarly the alignment and fertility probabilities can be estimated for all other alignment models when bootstrapping from a simpler model to a more complex model the simpler model is used to weigh the alignments and the counts are accumulated for the parameters of the more complex modelin principle the sum over all j alignments has to be calculated in the estepevaluating this sum by explicitly enumerating all alignments would be infeasiblefortunately models 1 and 2 and hmm have a particularly simple mathematical form such that the them algorithm can be implemented efficiently for the hmm this is referred to as the baumwelch algorithm since we know of no efficient way to avoid the explicit summation over all alignments in the them algorithm in the fertilitybased alignment models the counts are collected only over a subset of promising alignmentsfor models 3 to 6 we perform the count collection only over a small number of good alignmentsto keep the training fast we consider only a small fraction of all alignmentswe compare three different methods for using subsets of varying sizes in section 6 we show that by using the hmm instead of model 2 in bootstrapping the fertilitybased alignment models the alignment quality can be significantly improvedin the appendix we present an efficient training algorithm of the fertilitybased alignment modelswhen using the them algorithm on the standard versions of models 3 and 4 we observe that during the them iterations more and more words are aligned with the empty wordthis results in a poor alignment quality because too many words are aligned to the empty wordthis progressive increase in the number of words aligned with the empty word does not occur when the other alignment models are usedwe believe that this is due to the deficiency of model 3 and model 4the use of the them algorithm guarantees that the likelihood increases for each iterationthis holds for both deficient and nondeficient modelsfor deficient models however as the amount of deficiency in the model is reduced the likelihood increasesin models 3 and 4 as defined in brown della pietra della pietra and mercer the alignment model for nonempty words is deficient but the alignment model for the empty word is nondeficienthence the them algorithm can increase likelihood by simply aligning more and more words with the empty word3 therefore we modify models 3 and 4 slightly such that the empty word also has a deficient alignment modelthe alignment probability is set to p 1j for each source word aligned with the empty wordanother remedy adopted in och and ney is to choose a value for the parameter p1 of the emptyword fertility and keep it fixedto overcome the problem of overfitting on the training data and to enable the models to cope better with rare words we smooth the alignment and fertility probabilitiesfor the alignment probabilities of the hmm we perform an interpolation with a uniform distribution p 1i using an interpolation parameter α for the fertility probabilities we assume that there is a dependence on the number of letters g of e and estimate a fertility distribution p using the them 
algorithmtypically longer words have a higher fertilityby making this assumption the model can learn that the longer words usually have a higher fertility than shorter wordsusing an interpolation parameter β the fertility distribution is then computed as pβ0 p 0 p here n denotes the frequency of e in the training corpusthis linear interpolation ensures that for frequent words β the specific distribution p dominates and that for rare words β the general distribution p dominatesthe interpolation parameters α and β are determined in such a way that the alignment quality on heldout data is optimizeda conventional bilingual dictionary can be considered an additional knowledge source that can be used in trainingwe assume that the dictionary is a list of word strings the entries for each language can be a single word or an entire phraseto integrate a dictionary into the them algorithm we compare two different methods in this section a is an additional parameter describing the size of the sample that is used to estimate the model pthis count is then used instead of n in the them algorithm as shown in equation as a result only dictionary entries that indeed occur in the training corpus have a large effect in trainingthe motivation behind this is to avoid a deterioration of the alignment as a result of outofdomain dictionary entriesevery entry in the dictionary that does cooccur in the training corpus can be assumed correct and should therefore obtain a high countwe set µ 0in this section we describe various methods for performing a symmetrization of our directed statistical alignment models by applying a heuristic postprocessing step that combines the alignments in both translation directions the baseline alignment model does not allow a source word to be aligned with more than one target wordtherefore lexical correspondences like that of the german compound word zahnarzttermin with the english dentists appointment because problems because a single source word must be mapped to two or more target wordstherefore the resulting viterbi alignment of the standard alignment models has a systematic loss in recallto solve this problem we perform training in both translation directions as a result we obtain two alignments aj1 and bi1 for each pair of sentences in the training corpuslet a1 aj 01 and a2 bi 01 denote the sets of alignments in the two viterbi alignmentsto increase the quality of the alignments we combine a1 and a2 into one alignment matrix a using the following combination methods determinedthe elements of this intersection result from both viterbi alignments and are therefore very reliablethen we extend the alignment a iteratively by adding alignments occurring only in the alignment a1 or in the alignment a2 if neither fj nor ei has an alignment in a or if both of the following conditions hold obviously the intersection of the two alignments yields an alignment consisting of only onetoone alignments with a higher precision and a lower recall than either one separatelythe union of the two alignments yields a higher recall and a lower precision of the combined alignment than either one separatelywhether a higher precision or a higher recall is preferred depends on the final application for which the word alignment is intendedin applications such as statistical machine translation a higher recall is more important so an alignment union would probably be chosenin lexicography applications we might be interested in alignments with a very high precision obtained by performing an alignment 
intersectionin the following we present an annotation scheme for singlewordbased alignments and a corresponding evaluation criterionit is well known that manually performing a word alignment is a complicated and ambiguous task therefore in performing the alignments for the research presented here we use an annotation scheme that explicitly allows for ambiguous alignmentsthe persons conducting the annotation are asked to specify alignments of two different kinds an s alignment for alignments that are unambiguous and a p alignment for ambiguous alignmentsthe p label is used especially to align words within idiomatic expressions and free translations and missing function words the reference alignment thus obtained may contain manytoone and onetomany relationshipsfigure 2 shows an example of a manually aligned sentence with s and p labelsthe quality of an alignment a aj 0 is then computed by appropriately redefined precision and recall measures and the following alignment error rate which is derived from the wellknown fmeasure a manual alignment with s and p connectionsthese definitions of precision recall and the aer are based on the assumption that a recall error can occur only if an s alignment is not found and a precision error can occur only if the found alignment is not even p the set of sentence pairs for which the manual alignment is produced is randomly selected from the training corpusit should be emphasized that all the training of the models is performed in a completely unsupervised way from this point of view there is no need to have a test corpus separate from the training corpustypically the annotation is performed by two human annotators producing sets s1 p1 s2 p2to increase the quality of the resulting reference alignment the annotators are presented with the mutual errors and asked to improve their alignments where possiblefrom these alignments we finally generate a reference alignment that contains only those s connections on which both annotators agree and all p connections from both annotatorsthis can be accomplished by forming the intersection of the sure alignments and the union of the possible alignments respectivelyby generating the reference alignment in this way we obtain an alignment error rate of 0 percent when we compare the s alignments of every single annotator with the combined reference alignmentwe present in this section results of experiments involving the verbmobil and hansards tasksthe verbmobil task is a speech translation task in the domain of appointment scheduling travel planning and hotel reservationthe bilingual sentences used in training are correct transcriptions of spoken dialogueshowever they include spontaneous speech effects such as hesitations false starts and ungrammatical phrasesthe frenchenglish hansards task consists of the debates in the canadian parliamentthis task has a very large vocabulary of about 100000 french words and 80000 english words4 statistics for the two corpora are shown in tables 2 and 3the number of running words and the vocabularies are based on fullform words and the punctuation markswe produced smaller training corpora by randomly choosing 500 2000 and 8000 sentences from the verbmobil task and 500 8000 and 128000 sentences from the hansards taskfor both tasks we manually aligned a randomly chosen subset of the training corpusfrom this subset of the corpus the first 100 sentences are used as the development corpus to optimize the model parameters that are not trained via the them algorithm the remaining sentences are 
used as the test corpusthe sequence of models used and the number of training iterations used for each model is referred to in the following as the training schemeour standard training scheme on verbmobil is 15h5334363this notation indicates that five iterations of model 1 five iterations of hmm three iterations of model 3 three iterations of model 4 and three iterations of model 6 are performedon hansards we use 15h10334363this training scheme typically gives very good results and does not lead to overfittingwe use the slightly modified versions of model 3 and model 4 described in section 32 and smooth the fertility and the alignment parametersin the estep of the them algorithm for the fertilitybased alignment models we use the viterbi alignment and its neighborhoodunless stated otherwise no bilingual dictionary is used in trainingtables 4 and 5 compare the alignment quality achieved using various models and training schemesin general we observe that the refined models yield significantly better results than the simple model 1 or dice coefficienttypically the best results are obtained with model 6this holds across a wide range of sizes for the training corpus from an extremely small training corpus of only 500 sentences up to a training corpus of 15 million sentencesthe improvement that results from using a larger training corpus is more significant however if more refined models are usedinterestingly even on a tiny corpus of only 500 sentences alignment error rates under 30 are achieved for all models and the best models have error rates somewhat under 20we observe that the alignment quality obtained with a specific model heavily depends on the training scheme that is used to bootstrap the modelcomparison of alignment error rate for model 1 and dice coefficient we pointed out in section 2 that from a theoretical viewpoint the main advantage of statistical alignment models in comparison to heuristic models is the wellfounded mathematical theory that underlies their parameter estimationtables 4 and 5 show that the statistical alignment models significantly outperform the heuristic dice coefficient and the heuristic dice coefficient with competitive linking even the simple model 1 achieves better results than the two dice coefficient modelsit is instructive to analyze the alignment quality obtained in the them training of model 1figure 3 shows the alignment quality over the iteration numbers of model 1we see that the first iteration of model 1 achieves significantly worse results than the dice coefficient but by only the second iteration model 1 gives better results than the dice coefficientan important result of these experiments is that the hidden markov alignment model achieves significantly better results than model 2we attribute this to the fact that the hmm is a homogeneous firstorder alignment model and such models are able to better represent the locality and monotonicity properties of natural languagesboth models have the important property of allowing an efficient implementation of the them algorithm on the largest verbmobil task the hmm achieves an improvement of 38 over model 2on the largest hansards task the improvement is 87interestingly this advantage continues to hold after bootstrapping more refined modelson model 4 the improvement is 14 and 48 respectivelywe conclude that it is important to bootstrap the refined alignment models with good initial parametersobviously if we use model 2 for bootstrapping we eventually obtain a poor local optimumin tables 6 and 7 we compare 
the results obtained by using different numbers of alignments in the training of the fertilitybased alignment modelswe compare the three different approaches described in section 3 using only the viterbi alignment using in addition the neighborhood of the viterbi alignment and using the pegged alignmentsto reduce the training time we restrict the number of pegged alignments by using only those in which pr is not much smaller than the probability of the viterbi alignmentthis reduces the training time drasticallyfor the large hansards corpus however there still is an unacceptably large training timetherefore we report the results for only up to 128000 training sentencesthe effect of pegging strongly depends on the quality of the starting point used for training the fertilitybased alignment modelsif we use model 2 as the starting point we observe a significant improvement when we use the neighborhood alignments and the pegged alignmentsif we use only the viterbi alignment the results are significantly worse than using additionally the neighborhood of the viterbi alignmentif we use hmm as the starting point we observe a much smaller effectwe conclude that using more alignments in training is a way to avoid a poor local optimumtable 8 shows the computing time for performing one iteration of the them algorithmusing a larger set of alignments increases the training time for model 4 and model 5 significantlysince using the pegging alignments yields only a moderate improvement in performance all following results are obtained by using the neighborhood of the viterbi alignment without peggingtables 9 and 10 show the effect on the alignment error rate of smoothing the alignment and fertility probabilitieswe observe a significant improvement when we smooth the alignment probabilities and a minor improvement when we smooth the fertility probabilitiesan analysis of the alignments shows that smoothing the fertility probabilities significantly reduces the frequently occurring problem of rare words forming garbage collectors in that they tend to align with too many words in the other language without smoothing we observe early overfitting the alignment error rate increases after the second iteration of hmm as shown in figure 4on the verbmobil task the best alignment error rate is obtained in the second iterationon the hansards task the best alignment error rate is obtained in the sixth iterationin iterations subsequent to the second on the verbmobil task and the sixth on the hansards task the alignment error rate increases significantlywith smoothing of the alignment paramoverfitting on the training data with the hidden markov alignment model using various smoothing parameters eters we obtain a lower alignment error rate overfitting occurs later in the process and its effect is smallertables 11 and 12 show the effects of including a dependence on word classes in the alignment model as described in section 23the word classes are always trained on the same subset of the training corpus as is used for the training of the alignment modelswe observe no significant improvement in performance as a result of including dependence on word classes when a small training corpus is useda possible reason for this lack of improvement is that either the word classes themselves or the resulting large number of alignment parameters cannot be estimated reliably using a small training corpuswhen a large training corpus is used however there is a clear improvement in performance on both the verbmobil and the hansards 
taskstables 13 and 14 show the effect of using a conventional bilingual dictionary in training on the verbmobil and hansards tasks respectivelywe compare the two methods for using the dictionary described in section 34we observe that the method with a fixed threshold of µ 16 gives the best resultsthe method with a varying µ gives worse results but this method has one fewer parameter to be optimized on heldout dataon small corpora there is an improvement of up to 67 on the verbmobil task and 32 on the hansards task but when a larger training corpus is used the improvements are reduced to 11 and 04 respectivelyinterestingly the amount of the overall improvement contributed by the use of a conventional dictionary is small compared to the improvement achieved through the use of better alignment modelsin this section we compare the results obtained using different translation directions and using the symmetrization methods described in section 4tables 15 and 16 show precision recall and alignment error rate for the last iteration of model 6 for both translation directionsin this experiment we use the conventional dictionary as wellparticularly for the verbmobil task with the language pair germanenglish we observe that for german as the source language the alignment error rate is much higher than for english as source languagea possible reason for this difference in the alignment error rates is that the baseline alignment representation as a vector aj1 does not allow german word compounds to be aligned with more than one english wordthe effect of merging alignments by forming the intersection the union or the refined combination of the viterbi alignments in both translation directions is shown in tables 17 and 18figure 5 shows the corresponding precisionrecall graphsby using the refined combination we can increase precision and recall on the hansards taskthe lowest alignment error rate on the hansards task is obtained by using the intersection methodby forming a union or intersection of the alignments we can obtain very high recall or precision values on both the hansards task and the verbmobil taskalignment models similar to those studied in this article have been used as a starting point for refined phrasebased statistical machine translation systems in och and ney the overall result of the experimental evaluation has been that an improved alignment quality yields an improved subjective quality of the statistical machine translation system as wellin this article we have discussed in detail various statistical and heuristic word alignment models and described various modifications and extensions to models known in the literaturewe have developed a new statistical alignment model that has yielded the best results among all the models we considered in the experiments we have conductedwe have presented two methods for including a conventional bilingual dictionary in training and described heuristic symmetrization algorithms that combine alignments in both translation directions possible between two languages producing an alignment with a higher precision a higher recall or an improved alignment error ratewe have suggested measuring the quality of an alignment model using the quality of the viterbi alignment compared to that achieved in a manually produced reference alignmentthis quality measure has the advantage of automatic evaluationto produce the reference alignment we have used a refined annotation scheme that reduces the problems and ambiguities associated with the manual construction of a 
word alignmentwe have performed various experiments to assess the effect of different alignment models training schemes and knowledge sourcesthe key results of these experiments are as follows further improvements in alignments are expected to be produced through the adoption of cognates and from statistical alignment models based on word groups rather than single words the use of models that explicitly deal with the hierarchical structures of natural language is very promising we plan to develop structured models for the lexicon alignment and fertility probabilities using maximumentropy modelsthis is expected to allow an easy integration of more dependencies such as in a secondorder alignment model without running into the problem of the number of alignment parameters getting unmanageably largefurthermore it will be important to verify the applicability of the statistical alignment models examined in this article to less similar language pairs such as chineseenglish and japaneseenglishin this appendix we describe some methods for efficient training of fertilitybased alignment modelsthe core idea is to enumerate only a small subset of good alignments in the estep of the them algorithm instead of enumerating all j alignmentsthis small subset of alignments is the set of neighboring alignments of the best alignment that can be found by a greedy search algorithmwe use two operators to transform alignments the move operator mij changes aj i and the swap operator sj1j2 exchanges aj1 and aj2the neighborhood n of an alignment a is then defined as the set of all alignments that differ by one move or one swap from alignment a for one step of the greedy search algorithm we define the following hillclimbing operator which yields for an alignment a the most probable alignment b in the neighborhood n similarly we define a hillclimbing operator for the other alignment modelsa straightforward count collection procedure for a sentence pair following the description in brown della pietra della pietra and mercer is as follows5 increase the counts for p1 a major part of the time in this procedure is spent on calculating the probability pr of an alignment ain general this takes about operationsbrown della pietra della pietra and mercer describe a method for obtaining pr incrementally from pr if alignment a differs only by moves or swaps from alignment athis method results in a constant number of operations that is sufficient to calculate the score of a move or the score of a swapanalyzing the training program reveals that most of the time is spent on the computation of the costs of moves and swapsto reduce the number of operations required in such computation these values are cached in two matriceswe use one matrix for the scores of a move aj i and an additional matrix for the scores of a swap of aj and aj during the hill climbing it is sufficient after making a move or a swap to update only those rows or columns in the matrix that are affected by the move or swapfor example when performing a move aj i it is necessary to similar updates have to be performed after a swapin the count collection it is possible to use the same matrices as obtained in the last hillclimbing stepby restricting in this way the number of matrix entries that need to be updated it is possible to reduce the number of operations in hill climbing by about one order of magnitudethe straightforward algorithm given for performing the count collection has the disadvantage of requiring that all alignments in the neighborhood of alignment a be 
enumerated explicitlyin addition it is necessary to perform a loop over all targets and a loop over all source positions to update the lexiconalignment and the fertility countsto perform the count collection in an efficient way we use the fact that the alignments in the neighborhood n are very similarthis allows the sharing of many operations in the count collection processto efficiently obtain the alignment and lexicon probability counts we introduce the following auxiliary quantities that use the move and swap matrices that are available after performing the hill climbing described above for the alignment counts c and the lexicon counts c we have to obtain the fertility probability counts and the count for p1 efficiently we introduce the following auxiliary quantities these quantities do not depend on swaps since a swap does not change the fertilities of an alignmentfor the fertility counts we have using the auxiliary quantities a count collection algorithm can be formulated that requires about o2 operationsthis is one order of magnitude faster than the straightforward algorithm described abovein practice we observe that the resulting training is 1020 times fasterthis work has been partially supported as part of the verbmobil project by the german federal ministry of education science research and technology and as part of the eutrans project by the european unionin addition this work has been partially supported by the national science foundation under grant noiis9820687 through the 1999 workshop on language engineering center for language and speech processing johns hopkins universityall work for this paper was done at rwth aachen
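to make two of the building blocks described in the article above more concrete, here are two short python sketches. they are minimal reconstructions written for this edit, not the authors' implementation or the publicly available toolkit; the toy corpus, the flat initialization, and the stand-in scoring function in the second sketch are assumptions made purely for illustration. the first sketch runs a few EM iterations of model 1 and reads off its viterbi alignment, which for model 1 decomposes into an independent argmax per source position.

# a minimal reconstruction of ibm model 1 training: collect expected lexicon
# counts c(f, e) over all alignments (which factorize per source position for
# model 1) and renormalize; an explicit NULL token plays the role of the empty word.
from collections import defaultdict

NULL = "<null>"

def train_model1(corpus, iterations=5):
    # corpus: list of (source_words, target_words) sentence pairs
    t = defaultdict(lambda: 1.0)            # flat initialization of t(f|e)
    for _ in range(iterations):
        counts = defaultdict(float)         # expected counts c(f, e)
        totals = defaultdict(float)         # expected counts c(e)
        for src, tgt in corpus:
            tgt_null = [NULL] + tgt
            for f in src:
                z = sum(t[(f, e)] for e in tgt_null)     # normalizer for this source word
                for e in tgt_null:
                    p = t[(f, e)] / z                    # posterior that e generated f
                    counts[(f, e)] += p
                    totals[e] += p
        t = defaultdict(float, {fe: counts[fe] / totals[fe[1]] for fe in counts})
    return t

def viterbi_model1(t, src, tgt):
    # for model 1 the viterbi alignment is an independent argmax per source position;
    # target position 0 stands for the empty word
    tgt_null = [NULL] + tgt
    return [max(range(len(tgt_null)), key=lambda i: t[(f, tgt_null[i])]) for f in src]

corpus = [(["das", "haus"], ["the", "house"]),
          (["das", "buch"], ["the", "book"]),
          (["ein", "buch"], ["a", "book"])]
t = train_model1(corpus)
print(viterbi_model1(t, ["das", "buch"], ["the", "book"]))   # likely [1, 2]

the second sketch reconstructs the move/swap hill climbing from the appendix: starting from an initial alignment vector a (with 0 standing for the empty word), the best-scoring neighbor under the move and swap operators is adopted until no neighbor improves the score. any alignment model score could be plugged in for score_fn; the diagonal preference used below is only a placeholder, and the caching of move/swap scores described in the appendix is omitted for brevity.

# a minimal sketch of greedy hill climbing over alignments with move and swap operators
import itertools

def neighborhood(a, target_len):
    # all alignments that differ from a by one move or one swap
    for j, i in itertools.product(range(len(a)), range(target_len + 1)):
        if i != a[j]:                                    # move operator m_{j,i}: set a_j = i
            yield a[:j] + [i] + a[j + 1:]
    for j1, j2 in itertools.combinations(range(len(a)), 2):
        if a[j1] != a[j2]:                               # swap operator s_{j1,j2}
            b = list(a)
            b[j1], b[j2] = b[j2], b[j1]
            yield b

def hillclimb(a, target_len, score_fn):
    current, current_score = list(a), score_fn(a)
    while True:
        best, best_score = current, current_score
        for b in neighborhood(current, target_len):
            s = score_fn(b)
            if s > best_score:
                best, best_score = b, s
        if best == current:                              # no neighbor improves: local optimum
            return current
        current, current_score = best, best_score

# toy usage: stand-in score that simply prefers alignments close to the diagonal
score = lambda a: -sum(abs(i - (j + 1)) for j, i in enumerate(a))
print(hillclimb([0, 0, 0], target_len=3, score_fn=score))    # -> [1, 2, 3]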
J03-1002
a systematic comparison of various statistical alignment models. we present and compare various methods for computing word alignments using statistical or heuristic models. we consider the five alignment models presented in brown, della pietra, della pietra, and mercer, the hidden markov alignment model, smoothing techniques, and refinements. these statistical models are compared with two heuristic models based on the dice coefficient. we present different methods for combining word alignments to perform a symmetrization of directed statistical alignment models. as evaluation criterion we use the quality of the resulting viterbi alignment compared to a manually produced reference alignment. we evaluate the models on the german-english verbmobil task and the french-english hansards task. we perform a detailed analysis of various design decisions of our statistical alignment system and evaluate these on training corpora of various sizes. an important result is that refined alignment models with a first-order dependence and a fertility model yield significantly better results than simple heuristic models. in the appendix we present an efficient training algorithm for the alignment models presented. the trial and test data had been manually aligned at the word level, noting particular pairs of words either as sure or possible alignments.
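since the summary above ends with the sure/possible annotation scheme, here is a short python sketch of how an alignment produced in the two translation directions can be symmetrized and then scored against such a reference, following the precision, recall, and alignment error rate definitions given in the article (A the produced alignment, S the sure links, P the possible links with S contained in P). it is an illustration only; the toy link sets are invented, and the refined combination heuristic is omitted.

# minimal sketch: symmetrization by intersection/union and AER evaluation
def symmetrize(src2tgt, tgt2src, method="intersection"):
    # src2tgt: set of (source_position, target_position) links
    # tgt2src: set of (target_position, source_position) links from the reverse direction
    flipped = {(j, i) for (i, j) in tgt2src}
    return src2tgt & flipped if method == "intersection" else src2tgt | flipped

def alignment_error_rate(alignment, sure, possible):
    # precision = |A & P| / |A|, recall = |A & S| / |S|,
    # AER = 1 - (|A & S| + |A & P|) / (|A| + |S|), with S a subset of P (enforced below)
    a, s = set(alignment), set(sure)
    p = set(possible) | s
    precision = len(a & p) / len(a) if a else 1.0
    recall = len(a & s) / len(s) if s else 1.0
    aer = 1.0 - (len(a & s) + len(a & p)) / (len(a) + len(s))
    return precision, recall, aer

# toy example
a_s2t = {(1, 1), (2, 2), (3, 2)}          # source-to-target viterbi links
a_t2s = {(1, 1), (2, 3)}                  # target-to-source viterbi links
combined = symmetrize(a_s2t, a_t2s, "intersection")
sure = {(1, 1), (2, 2)}
possible = {(1, 1), (2, 2), (3, 2)}
print(sorted(combined), alignment_error_rate(combined, sure, possible))
# -> [(1, 1), (3, 2)] with precision 1.0, recall 0.5, AER 0.25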
graphbased generation of referring expressions this article describes a new approach to the generation of referring expressions we propose to formalize a scene as a labeled directed graph and describe content selection as a subgraph construction problem cost functions are used to guide the search process and to give preference to some solutions over others the current approach has four main advantages graph structures have been studied extensively and by moving to a graph perspective we get direct access to the many theories and algorithms for dealing with graphs many existing generation algorithms can be reformulated in terms of graphs and this enhances comparison and integration of the various approaches the graph perspective allows us to solve a number of problems that have plagued earlier algorithms for the generation of referring expressions and the combined use of graphs and cost functions paves the way for an integration of rulebased generation techniques with more recent stochastic approaches this article describes a new approach to the generation of referring expressionswe propose to formalize a scene as a labeled directed graph and describe content selection as a subgraph construction problemcost functions are used to guide the search process and to give preference to some solutions over othersthe current approach has four main advantages graph structures have been studied extensively and by moving to a graph perspective we get direct access to the many theories and algorithms for dealing with graphs many existing generation algorithms can be reformulated in terms of graphs and this enhances comparison and integration of the various approaches the graph perspective allows us to solve a number of problems that have plagued earlier algorithms for the generation of referring expressions and the combined use of graphs and cost functions paves the way for an integration of rulebased generation techniques with more recent stochastic approachesthe generation of referring expressions is one of the most common tasks in natural language generation and has been addressed by many researchers in the past two decades including appelt reiter dale and haddock dale dale and reiter horacek stone and webber krahmer and theune bateman and van deemter in this article we present a general graphtheoretic approach to the generation of referring expressionswe propose to formalize a scene as a labeled directed graph and describe the content selection problem as a subgraph construction problemthe graph perspective has four main advantages there are many attractive and wellunderstood algorithms for dealing with graph structures in this article we describe a straightforward branch and bound algorithm for finding the relevant subgraphs in which cost functions are used to guide the search process by defining different cost functions for the graph perspective we can simulate some of the wellknown algorithms for the generation of referring expressions mentioned abovethis facilitates the formal comparison of these algorithms and makes it easier to transfer results from one algorithm to another the graph perspective provides a clean solution for some problems that have plagued earlier algorithmsfor instance the generation of relational expressions is enhanced by the fact that both properties and relations are formalized in the same way namely as edges in a graph the combined use of graphs and cost functions paves the way for a natural integration of traditional rulebased approaches to generating referring 
expressions and more recent statistical approaches such as langkilde and knight and malouf in a single algorithmthe outline of this article is as followsin section 2 the content selection problem for generating referring expressions is explained and some wellknown solutions to the problem are discussedin section 3 we describe how scenes can be modeled as labeled directed graphs and show how content selection can be formalized as a subgraph construction problemsection 4 contains a sketch of the basic generation algorithm which is illustrated with a worked examplein section 5 various ways to formalize cost functions are discussed and comparedwe end with some concluding remarks and a discussion of future research directions in section 6there are many different algorithms for the generation of referring expressions each with its own objectives some aim at producing the shortest possible description others focus on psychological realism or realistic output the degree of detail in which the various algorithms are described differs considerablysome algorithms are fully formalized and come with explicit characterizations of their complexity others are more conceptual and concentrate on exploring new directions despite such differences most algorithms deal with the same problem definitionthey take as input a single object v for which a referring expression is to be generated and a set of objects from which the target object needs to be distinguished the task of the algorithm is to determine which set of properties is needed to single out the target object v from the distractorsthis is known as the content determination problem for referring expressionson the basis of this set of properties a distinguishing description for v can be generatedmost algorithms do not address the surface realization problem in much detail it is usually assumed that once the content for a referring expression has been determined a standard realizer such as kpml or surge can convert the meaning representation to natural languageconsider the example scene in figure 1in this scene as in any other scene we see a finite domain of entities d with properties p in this particular scene d d1d2d3d4 is the set of entities and p dog cat brown blackwhite large small is the set of propertiesa scene is usually represented as a database listing the properties of each element in d thus d1 dog small brown d2 dog large brown d3 dog large blackwhite d4 cat small brown a simple example scene consisting of some domestic animalsin what is probably the key reference on the topic dale and reiter describe and discuss a number of algorithms for the generation of referring expressionsone of these is the full brevity algorithm this algorithm first tries to generate a distinguishing description for the target object v using one single propertyif this fails it considers all possible combinations of two properties to see if any of these suffices for the generation of a distinguishing description and so onit is readily seen that this algorithm will output the shortest possible description if one existssuppose the full brevity algorithm is used to generate a description for d1 in figure 1there is no single property that distinguishes the target object d1 from the distractors d2 d3 d4but when considering all pairs of properties the algorithm will find that one such pair rules out all distractors namely small and dog the small dog is a successful and minimal distinguishing description for d1dale and reiter point out that the full brevity algorithm is both 
computationally infeasible and psychologically unrealisticthey offer the incremental algorithm as an alternativethe incremental algorithm considers properties for selection in a predetermined order based on the idea that human speakers and listeners prefer certain kinds of properties when describing objects from a given domainfor instance when discussing domestic animals it seems likely that a human speaker would first describe an animal by its type if that does not suffice first absolute attributes like color are tried followed by relative ones such as sizein sum the list of preferred attributes for our example domain would be essentially the incremental algorithm iterates through this list and for each property it encounters it determines whether adding this property to the properties selected so far would rule out any of the remaining distractorsif so it is included in the list of selected propertiesthere is one exception to this general strategy type information is always included even if it rules out no distractorsthe algorithm stops when all distractors are ruled out or when the end of the list of preferred attributes is reached suppose we apply the incremental algorithm to d1 from figure 1 with as preferred attributesthe type of d1 listed in the database is dogthis property is selected it rules out d4 next we consider the color of d1 the animal is brownthis property rules out d3 and is selectedfinally we consider the size of our target object which is smallthis properly rules out the remaining distractor d2 and hence is included as wellat this point all distractors are ruled out and the set of selected properties is dog brown small which a linguistic realizer might express as the small brown dog this is a successful distinguishing description but not a minimal one the property brown is strictly speaking made redundant by the later inclusion of the property smallsince there is no backtracking in the incremental algorithm however every selected property is realized this aspect is largely responsible for the computational efficiency of the algorithm but dale and reiter also claim that it is psychologically realistic they point out that sometimes people may describe an object as the white bird even though the simpler the bird would have been sufficient even though there are various useful and interesting algorithms for the generation of referring expressions a number of open questions remainrecently there has been an increased interest in statistical approaches to natural language generationfor example malouf has shown that large corpora can be used to determine the order of realization of sequences of prenominal adjectivesit is unclear how such statistical work on generation can be combined with older rulebased work such as the algorithms just discussedin addition many algorithms still have difficulties with the generation of relational descriptions to illustrate the problem consider the scene depicted in figure 2in this scene we again see a finite domain of entities d with certain properties p here d d1 d2 d3 d4 is the set of entities and p dog doghouse small large brown white is the set of propertiesclearly no algorithm can generate a distinguishing description referring to d1 on this basisintuitively d1 can be distinguished from d2 only using its relation to the doghouse d3to facilitate this we extend the scene description with a set of relations are left of right of contain in a few algorithms have been developed that address the issue of relational descriptionsthe earliest is from 
dale and haddock who offer an extension of the full brevity algorithmthe dale and haddock algorithm has a problem with infinite recursions it may produce descriptions like the dog in the doghouse that contains a dog that is inside a doghouse dale and haddock somewhat ad hoc solve this problem by stipulating that a property or relation may be used only oncekrahmer and theune describe an extension of the incremental algorithm that allows for relational descriptionstheir extension suffers from what may be called the problem of forced incrementality when a first relation fails to rule out all remaining distractors additional relations will be tried incrementallyalthough it could be argued that incremental selection of properties is psychologically plausible it seems less plausible for relationsit is unlikely that someone would describe an a graph representation of the scene in figure 2 object as the dog next to the tree in front of the garage in a situation in which the dog in front of the garage would sufficeas we shall argue the graph perspective provides a clean solution for these problemsin the previous section we saw that a scene can be described in terms of a domain of entities d with properties p and relations r such a scene can be represented as a labeled directed graph let l p you r be the set of labels with p and r disjoint then g is a labeled directed graph where vg c d is the set of vertices and eg c vg x l x vg is the set of labeled directed edges where this can be done without creating confusion the graph subscript is omittedthroughout this article we use the following notationsif g is a graph and e an edge then the extension of g with e denoted as g e is the graph moreover with eg we refer to the set of edges in eg from v to w that is eg e e eg e for l e lthe scene given in figure 2 for example can now be represented by the graph in figure 3this graph models the respective spatial relations between the two chihuahuas between the two doghouses and between each dog and the nearest doghousefor the sake of transparency we have not modeled the relations between the dogs and the distant doghouses note that properties are always modeled as loops some graphs for referring expressions with circles around the intended referent that is as edges that start and end in the same vertexrelations may have different start and end vertices but they do not have to finally note that the graph sometimes contains properties of various levels of specificity this aspect of scene graphs will be further discussed in section 5now the content determination problem for referring expressions can be formulated as a graph construction problemto decide which information to include in a referring expression for an object v e v we construct a connected directed labeled graph over the set of labels l and an arbitrary set of vertices but including v a graph is connected iff there is a path between each pair of verticesinformally we say that a vertex from a graph h refers to a given entity in the scene graph g iff the graph h can be placed over the scene graph g in such a way that the vertex being referred to is placed over the vertex of the given entity in g and each edge from h with label l can be placed over an edge from g with the same labelfurthermore a vertexgraph pair is distinguishing iff it refers to exactly one vertex in the scene graphconsider the three vertexgraph pairs in figure 4 in which circled vertices stand for the intended referentgraph refers to all vertices of the graph in figure 3 graph can 
refer to both d1 and d2 and graph is distinguishing in that it can refer only to d1note that the graphs might be realized as something next to something else a chihuahua and the dog in the doghouse respectivelyhere we concentrate on the generation of distinguishing vertexgraph pairsformally the notion that a graph h can be placed over another graph g ft eg corresponds to the notion of a subgraph isomorphism h can be placed over g iff there exists a subgraph g of g such that h is isomorphic to gh is isomorphic to g iff there exists a bijection 7r vh vg such that for all vertices vw e vh and all l e l e eh e eg in words the bijective function 7r maps all the vertices in h to corresponding vertices in g in such a way that any edge with label l between vertices v and w in h is matched with an edge with the same label between the g counterparts of v and w when h is isomorphic to some subgraph of g by an isomorphism 7r we write h cr g given a graph h and a vertex v in h and a graph g and a vertex w in g we define that the pair refers to the pair iff h is connected and h cr g and 7rv w furthermore uniquely refers to is distinguishing iff refers to and there is no vertex w in g different from w such that refers to the problem considered in this article can now be formalized as follows given a graph g and a vertex w in g find a pair such that uniquely refers to consider for instance the task of finding a pair that uniquely refers to the vertex labeled d1 in figure 3it is easily seen that there are a number of such pairs three of which are depicted in figure 5we would like to have a mechanism that allows us to give certain solutions to this kind of task preference over other solutionsfor this purpose we shall use cost functionsin general a cost function is a function that assigns to each subgraph of a scene graph a nonnegative numberas we shall see by defining cost functions in different ways we can mimic various algorithms for the generation of referring expressions known from the literaturethe basic decision problem for subgraph isomorphism is known to be npcomplete here we are interested in connected h but unfortunately that restriction does not reduce the theoretical complexitynote that this characterization of the worstcase complexity holds for graphs in which all edges have the same label in that case each edge from h can potentially be matched to any edge from g the bestcase complexity is given when each edge is uniquely labeledin practice the situation will most often be somewhere between these extremesin general we can say that the more diverse the labeling of edges in the graph of a particular scene is the sooner a distinguishing vertexgraph pair will be foundit is worth pointing out that there are various alternatives to full subgraph isomorphism that have a lower complexityfor instance as soon as an upper bound k is defined on the number of edges in a distinguishing graph the problem loses its intractability and becomes solvable in the worst case in polynomial o time where n is number of edges in the graph g restricting the problem in such a way is rather harmless for our current purposes as it prohibits the generation only of distinguishing descriptions with more than k properties and for all practical purposes k can be small defining an upper bound k however does have a disadvantage we lose completeness in particular the algorithm will fail for objects that can be uniquely described only with k 1 edgesof course one could argue that in such cases objects should be distinguished using 
other means nevertheless it is worthwhile to look for classes of graphs for which the subgraph isomorphism problem can be solved more efficiently without postulating upper boundsfor instance if g and h are planar graphs the problem can be solved in time linear in the number of vertices of g basically a planar graph is one that can be drawn on a plane in such a way that there are no crossing edges in general there is no a priori reason to assume that our scene representations will be planaryet every nonplanar graph can be modified into a closely related planar onewe briefly address planarization of scene graphs in the appendixa final alternative is worth mentioningthe general approach to the problem of subgraph isomorphism detection assumes that both graphs are given onlinefor our current purposes however it may happen that the scene graph is fixed and known beforehand and only the referring graph is unknown and given onlinemessmer and bunke describe a method that converts the known graph into a decision treeat run time the input graph is classified by the decision tree which detects subgraph isomorphismsthe disadvantage of this approach is that the decision tree may contain in the worst case an exponential number of nodesbut the main advantage is that the complexity of the new subgraph isomorphism algorithm is only quadratic in the number of vertices of the input referring graphnote that with this approach we do not lose information from the scene graph nor do we lose completenessin sum the basic approach to subgraph isomorphisms is npcomplete but there exist various reformulations of the problem that can be solved more efficientlydeciding which of these is the most suitable in practice however is beyond the scope of this articlefinally it is worth stressing that the npcompleteness is due to the presence of edges representing relations between different verticesif we restrict the approach to properties testing for subgraph isomorphisms becomes trivialin this section we give a highlevel sketch of the graphbased generation algorithmthe algorithm consists of two main components a subgraph construction algorithm and a subgraph isomorphism testing algorithm for expository reasons we do not address optimization strategies we assume that a scene graph g is giventhe algorithm systematically tries all relevant subgraphs h of the scene graph g by starting with the subgraph containing only the vertex v and expanding it recursively by trying to add edges from g that are adjacent to the subgraph h constructed up to that pointin this way we know that the results will be a connected subgraphwe refer to this set of adjacent edges as the h neighbors in g formally the algorithm returns the cheapest distinguishing subgraph h that refers to v if such a distinguishing graph exists otherwise it returns the undefined null graph l we use cost functions to guide the search process and to give preference to some solutions over othersif h is a subgraph of g then the costs of h denoted as cost can be given by summing over the costs associated with the vertices and edges of h formally in fact this is only one possible way to define a cost functionthe only hard requirement cost functions have to fulfill is monotonicitythat is adding an edge e to a graph g should never result in a graph cheaper than g formally dg c g de e eg cost sw that is the distractor set is restricted to those vertices n in the scene graph g that currently are at least as salient as the target object v for target objects that are linguistically 
salient this will typically lead to a reduction of the distractor setconsequently distinguishing graphs for these target objects will generally be smaller than those for nonsalient objectsmoreover we will be able to find distinguishing graphs for a salient object v relatively fast since we already have a distinguishing graph and we can use this graph as our initial value of bestgraphone of the important open questions in natural language generation is how the common rulebased approaches to generation can be combined with recent insights from statistical natural language processing the approach proposed in this article makes it possible to combine graph reformulations of wellknown rulebased generation algorithms with stochastic cost functions such a cost function could be derived from a sufficiently large corpusfor instance as a first approximation we could define the costs of adding an edge e in terms of the probability p that e occurs in a distinguishing description thus properties that occur frequently are cheap properties that are relatively rare are expensivein this way we would probably derive that polish owczarek nizinny sheepdog indeed costs more than browneven though this first approximation already has some interesting consequences it is probably not enough to obtain a plausible and useful cost functionfor instance it is unlikely that the cooccurrence of edges is fully independent a husky is likely to be white and a chihuahua is notsuch dependencies are not modeled by the definition given abovein addition properties referring to size such as small and large probably occur more often in a corpus than properties referring to colors such as brown or yellow which at first sight appears to run counter to the earlier observation that speakers generally prefer absolute properties over relative onesthe reason for this however is probably that there are simply fewer ways to describe the size than there are to describe the color of objectssearching for a more sophisticated method of defining stochastic cost functions is therefore an interesting line of future researchin this article we have presented a new approach to the content determination problem for referring expressionswe proposed to model scenes as labeled directed graphs in which objects are represented as vertices and the properties and relations of these objects are represented as edgesthe problem of finding a referring expression for an object is treated as finding a subgraph of the scene graph that is isomorphic to the intended referent but not to any other objectthe theoretical complexity of this reformulation of the content determination problem is npcomplete but there exist various restrictions that have a polynomial complexitywe have described a general and fully implemented algorithm based on the subgraph isomorphism idea consisting of two main functions one that constructs referring graphs and one that tests for subgraph isomorphismscost functions are used to guide the search process and to give preference to some solutions over othersoptimization has not been the focus of this article but we came across various heuristic strategies that would speed up the algorithmfor instance we can try edges in the order determined by the cost function and we can use a greedy algorithm to find a first distinguishing graph quicklyin general one of the advantages of the graph perspective is that many efficient algorithms for dealing with graph structures are knownwe can use those algorithms to formulate more efficient versions of the 
subgraph construction component and of the subgraph isomorphism testing component the graph perspective has a number of attractive properties by reformulating the content determination problem as a graph construction problem we can directly apply the many techniques and algorithms for dealing with graph structures the use of cost functions allows us to model different search methods each restricting the search space in its own wayby defining cost functions in different ways we can mimic and extend various wellknown algorithms from the literature the generation of relational descriptions is straightforward the problems that plague some other algorithms for the generation of relational descriptions do not arisemoreover the approach to relations proposed here is fully general it applies to all nary relations not just binary ones the use of cost functions paves the way for integrating statistical information directly into the generation processin fact performing experiments with various ways to estimate stochastic cost functions from corpora is one path for future research that we have identifiedbesides looking for graphbased optimizations and performing experiments with stochastic cost functions there are three other lines for future research we would like to mentionthe first concerns the construction of scene graphshow should the decision be made as to which aspects of a scene to represent in the graphnaturally the algorithm can only refer to entities that are modeled in the scene graph but representing every possible object in a single graph will lead to an explosion of edges and verticesperhaps some notion of focus of attention can be used to restrict the scene graphit would also be interesting to look for automatic methods for the construction of scene graphswe might use computer vision algorithms which are often graphbased themselves for this purposefor example bauckhage et al describe an assembly system in which computer vision is used to convert a workspace with various building blocks into a labeled directed scene graphnote that this approach is also able to deal with dynamic scenes it can track changes in the workspace another issue that we have not discussed in much detail is linguistic realizationhow should the information contained in a referring graph be expressed in natural languageso far we have assumed that a distinguishing graph can simply be constructed first and subsequently fed into a realization enginethere may however be certain dependencies between content selection and realization one way to take these dependencies into account would be to reformulate the cost function in such a way that it promotes graphs that can easily be realized and punishes graphs that are more difficult to realizea final aspect of the graph model that deserves further investigation is based on the fact that we can look at a graph such as that in figure 3 as a kripke modelkripke models are used in modeltheoretic semantics for modal logicsthe advantage of looking at graphs such as that in figure 3 as kripke models is that we can use tools from modal logic to reason about these structuresfor example we can reformulate the problem of determining the content of a distinguishing description in terms of hybrid logic as follows iϕ aj in words when we want to refer to vertex i we are looking for that distinguishing formula ϕ that is true of i but not of any j different from ione advantage of this logical perspective is that logical properties that are not covered by most generation algorithms fit in very 
well with this perspectiveplanar graphs may be relevant for our current purposes since subgraph isomorphism can be tested more efficiently on planar graphs than on arbitrary graphsthere are two ways in which a nonplanar graph g can be turned into a planar one g either the graph g can be pruned or it can be extended a disadvantage of the extension approach is that we lose the intuitive onetoone correspondence between potential target objects and vertices in the scene graph since the additional vertices only serve the purpose of planarizing the graph and do not represent objects in a scenea disadvantage of the pruning approach is that we lose informationthe presence of a cost function however is potentially very useful since it allows us to avoid eliminating comparatively cheap edgeshere for the sake of illustration we briefly describe a weighted greedy pruning algorithm that turns an arbitrary scene graph g with n vertices and m edges into a planar graph g with n vertices and at most m edgeswe start from the graph g where og is the set of looping edges from the scene graph that is og uvevg egnext we order the remaining edges from the scene graph rg egog with respect to their costs the cheapest one comes first the more expensive ones come later in order of increasing expensefor each e e rg we check whether g e is planar if it is e is added to egithe algorithm terminates when rg 0the result is a maximal planar subgraph g of the scene graph g that differs from g only possibly in the deletion of certain relatively expensive nonlooping edgeswe would like to thank dennis van oort and denis gerritsen for their help in the implementation and alexander koller and kees van deemter for some very useful discussionsthanks are also due to paul piwek mariet theune ielka van der sluis and the anonymous reviewers for helpful comments on an earlier version of this article
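As a concrete illustration of the weighted greedy pruning procedure sketched in the appendix above, the following Python fragment keeps all looping (property) edges, then adds the remaining relational edges in order of increasing cost whenever the graph stays planar. It is a minimal sketch rather than the authors' implementation: the triple-based edge encoding and the cost dictionary are assumptions, and planarity is tested with networkx's check_planarity on a simple undirected graph, since self-loops and parallel labelled edges never affect planarity and can be left out of the test.

```python
import networkx as nx

def planarize(edges, cost):
    """edges: iterable of (v, label, w) triples describing the scene graph;
    cost: dict mapping each triple to a non-negative number.
    Returns a maximal planar selection of the triples (all loops are kept)."""
    edges = list(edges)
    loops = [e for e in edges if e[0] == e[2]]                  # O_G: property loops, always kept
    relational = sorted((e for e in edges if e[0] != e[2]),
                        key=lambda e: cost[e])                  # R_G, cheapest first
    kept = list(loops)
    test_graph = nx.Graph()                                     # undirected graph for planarity tests
    test_graph.add_nodes_from([v for v, _, _ in edges] + [w for _, _, w in edges])
    for v, label, w in relational:
        if test_graph.has_edge(v, w):
            kept.append((v, label, w))                          # parallel edge: planarity unchanged
            continue
        test_graph.add_edge(v, w)
        if nx.check_planarity(test_graph)[0]:
            kept.append((v, label, w))
        else:
            test_graph.remove_edge(v, w)                        # adding it would break planarity
    return kept
```

Because edges are tried cheapest first, the relatively expensive relational edges are the ones sacrificed when planarity forces a choice, which is exactly the role the cost function plays in the pruning approach described above.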
J03-1003
Graph-based generation of referring expressions. This article describes a new approach to the generation of referring expressions. We propose to formalize a scene as a labeled directed graph and describe content selection as a subgraph construction problem. Cost functions are used to guide the search process and to give preference to some solutions over others. The current approach has four main advantages: (1) graph structures have been studied extensively, and by moving to a graph perspective we get direct access to the many theories and algorithms for dealing with graphs; (2) many existing generation algorithms can be reformulated in terms of graphs, and this enhances comparison and integration of the various approaches; (3) the graph perspective allows us to solve a number of problems that have plagued earlier algorithms for the generation of referring expressions; and (4) the combined use of graphs and cost functions paves the way for an integration of rule-based generation techniques with more recent stochastic approaches. One of the strengths of the graph-based algorithm is its ability to generate expressions that involve relations between objects, and these include spatial ones.
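To make the formalization in this summary concrete, here is a minimal Python sketch, under simplifying assumptions, of the core idea: a scene graph as a set of (vertex, label, vertex) triples with properties encoded as loops, a naive backtracking test for the "refers to" relation, and a cost-guided (uniform-cost, branch-and-bound style) construction of the cheapest distinguishing subgraph. It is not the article's implementation; all names, the triple encoding, and the assumption of non-negative additive edge costs are illustrative.

```python
import heapq
from itertools import count

def matches(ref_edges, pinned, scene_edges):
    """True iff the referring edges can be placed over the scene graph while
    respecting the variable-to-entity bindings already fixed in `pinned`."""
    if not ref_edges:
        return True
    (v, label, w), rest = ref_edges[0], ref_edges[1:]
    for a, l, b in scene_edges:
        if l != label or pinned.get(v, a) != a or pinned.get(w, b) != b:
            continue
        if matches(rest, dict(pinned, **{v: a, w: b}), scene_edges):
            return True
    return False

def referents(ref_edges, target_var, scene_edges, entities):
    """All scene entities that the pair (target_var, ref_edges) can refer to."""
    return {x for x in entities
            if matches(list(ref_edges), {target_var: x}, scene_edges)}

def cheapest_distinguishing_graph(scene_edges, target, cost):
    """Cheapest connected set of scene edges that singles out `target`, or None.
    With non-negative additive costs, the first goal popped from the
    cost-ordered frontier is optimal."""
    entities = {v for v, _, _ in scene_edges} | {w for _, _, w in scene_edges}
    tie = count()                                   # tie-breaker so heapq never compares sets
    frontier = [(0.0, next(tie), frozenset(), frozenset({target}))]
    seen = set()
    while frontier:
        c, _, edges, vertices = heapq.heappop(frontier)
        if edges in seen:
            continue
        seen.add(edges)
        if referents(edges, target, scene_edges, entities) == {target}:
            return c, set(edges)                    # distinguishing graph found
        for e in scene_edges:                       # expand with adjacent scene edges
            v, _, w = e
            if e not in edges and (v in vertices or w in vertices):
                heapq.heappush(frontier, (c + cost[e], next(tie),
                                          edges | {e}, vertices | {v, w}))
    return None

# Example: the domestic-animal scene from the article (properties only).
scene = {("d1", "dog", "d1"), ("d1", "small", "d1"), ("d1", "brown", "d1"),
         ("d2", "dog", "d2"), ("d2", "large", "d2"), ("d2", "brown", "d2"),
         ("d3", "dog", "d3"), ("d3", "large", "d3"), ("d3", "blackwhite", "d3"),
         ("d4", "cat", "d4"), ("d4", "small", "d4"), ("d4", "brown", "d4")}
print(cheapest_distinguishing_graph(scene, "d1", {e: 1.0 for e in scene}))
# cheapest option: the "dog" and "small" loops on d1, i.e. "the small dog"
```

With different cost assignments the same search reproduces different selection strategies, which is the point of the cost-function machinery summarized above.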
Word reordering and a dynamic programming beam search algorithm for statistical machine translation (IBM T. J. Watson Research Center; RWTH Aachen). In this article we describe an efficient beam search algorithm for statistical machine translation based on dynamic programming (DP). The search algorithm uses the translation model presented in Brown et al. Starting from a DP-based solution to the traveling-salesman problem, we present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm. Word reordering restrictions especially useful for the translation direction German to English are presented. The restrictions are generalized, and a set of four parameters to control the word reordering is introduced, which then can easily be adopted to new translation directions. The beam search procedure has been successfully tested on the Verbmobil task and on the Canadian Hansards task. For the medium-sized Verbmobil task, a sentence can be translated in a few seconds, only a small number of search errors occur, and there is no performance degradation as measured by the word error criterion used in this article.

This article is about a search procedure for statistical machine translation (MT). The task of the search procedure is to find the most likely translation given a source sentence and a set of model parameters. Here we will use a trigram language model and the translation model presented in Brown et al. Since the number of possible translations of a given source sentence is enormous, we must find the best output without actually generating the set of all possible translations; instead we would like to focus on the most likely translation hypotheses during the search process. For this purpose we present a data-driven beam search algorithm similar to the one used in speech recognition search algorithms. The major difference between the search problem in speech recognition and statistical MT is that MT must take into account the different word order for the source and the target language, which does not enter into speech recognition. Tillmann, Vogel, Ney, and Zubiaga proposes a dynamic programming based search algorithm for statistical MT that monotonically translates the input sentence from left to right. The word order difference is dealt with using a suitable preprocessing step. Although the resulting search procedure is very fast, the preprocessing is language specific and requires a lot of manual work. Currently, most search algorithms for
statistical mt proposed in the literature are based on the a concept here the word reordering can be easily included in the search procedure since the input sentence positions can be processed in any orderthe work presented in berger et al that is based on the a concept however introduces word reordering restrictions in order to reduce the overall search spacethe search procedure presented in this article is based on a dp algorithm to solve the travelingsalesman problem a datadriven beam search approach is presented on the basis of this dpbased algorithmthe cities in the tsp correspond to source positions of the input sentenceby imposing constraints on the possible word reorderings similar to that described in berger et al the dpbased approach becomes more effective when the constraints are applied the number of word reorderings is greatly reducedthe original reordering constraint in berger et al is shown to be a special case of a more general restriction scheme in which the word reordering constraints are expressed in terms of simple combinatorical restrictions on the processed sets of source sentence positions1 a set of four parameters is given to control the word reorderingadditionally a set of four states is introduced to deal with grammatical reordering restrictions the partial hypotheses cover the same set of source sentence positions and the partial hypotheses cover sets c of source sentence positions of equal cardinalitya partial hypothesis is said to cover a set of source sentence positions when exactly the positions in the set have already been processed in the search processto verify the effectiveness of the proposed techniques we report and analyze results for two translation tasks the german to english verbmobil task and french to english canadian hansards taskthe article is structured as followssection 2 gives a short introduction to the translation model used and reports on other approaches to the search problem in statistical mtin section 3 a dpbased search approach is presented along with appropriate pruning techniques that yield an efficient beam search algorithmsection 4 reports and analyzes translation results for the different translation directionsin section 5 we conclude with a discussion of the achieved resultsin this article we use the translation model presented in brown et al and the mathematical notation we use here is taken from that paper as well a source string fj1 f1 fj fj is to be translated into a target string ei1 e1 ei eihere i is the length of the target string and j is the length of the source stringamong all possible target strings we will choose the string with the highest probability as given by bayes architecture of the statistical translation approach based on bayes decision rulepr is the language model of the target language whereas pr l ei1 is the string translation modelthe language model probability is computed using a trigram language modelthe string translation probability pr is modeled using a series of five models of increasing complexity in traininghere the model used for the translation experiments is the ibm4 modelthis model uses the same parameter set as the ibm5 model which in preliminary experiments did not yield better translation resultsthe actual implementation used during the experiments is described in alonaizan et al and in och and ney the argmax operation denotes the search problem the overall architecture of the statistical translation approach is summarized in figure 1in general as shown in this figure there may be 
additional transformations to make the translation task simpler for the algorithmthe transformations may range from simple word categorization to more complex preprocessing steps that require some parsing of the source stringin this article however we will use only word categorization as an explicit transformation stepin the search procedure both the language and the translation model are applied after the text transformation stepsthe following types of parameters are used for the ibm4 translation model lexicon probabilities we use the lexicon probability p for translating the single target word e as the single source word f a source word f may be translated by the null word e0 a translation probability p is trained along with the regular translation probabilitiesfertilities a single target word e may be aligned to n 01 or more source wordsthis is explicitly modeled by the fertility parameter φ the probability that the target word e is translated by n source words is φthe fertility for the null word is treated specially berger et al describes the extension of a partial hypothesis by a pair of target words where e is not connected to any source word f in this case the socalled spontaneous target word e is accounted for with the fertilityhere the translation probability φ and notranslation probability pclassbased distortion probabilities when covering a source sentence position j we use distortion probabilities that depend on the previously covered source sentence positions in brown et al two types of distortion probabilities are distinguished the leftmost word of a set of source words f aligned to the same target word e is placed and the remaining source words are placedtwo separate distributions are used for these two casesfor placing the head the center function center is used the average position of the source words with which the target word ei_1 is alignedthe distortion probabilities are classbased they depend on the word class j7 of a covered source word f as well as on the word class e of the previously generated target word e the classes are automatically trained when the ibm4 model parameters are used during search an input sentence can be processed one source position at a time in a certain order primarily determined by the distortion probabilitieswe will use the following simplified set of translation model parameters lexicon probabilities p and distortion probabilities phere j is the currently covered input sentence position and j is the previously covered input sentence positionthe input sentence length j is included since we would like to think of the distortion probability as normalized according to jno fertility probabilities or null word probabilities are used thus each source word f is translated as exactly one target word e and each target word e is translated as exactly one source word f the simplified notation will help us to focus on the most relevant details of the dpbased search procedurethe simplified set of parameters leads to an unrealistic assumption about the length of the source and target sentence namely i jduring the translation experiments we will of course not make this assumptionthe implementation details for using the full set of ibm4 model parameters are given in section 392in this section we give a short overview of search procedures used in statistical mt brown et al and brown et al describe a statistical mt system that is based on the same statistical principles as those used in most speech recognition systems berger et al describes the 
frenchtoenglish candide translation system which uses the translation model proposed in brown et ala detailed description of the decoder used in that system is given in berger et al but has never been published in a paper throughout the search process partial hypotheses are maintained in a set of priority queuesthere is a single priority queue for each subset of covered positions in the source stringin practice the priority queues are initialized only on demand far fewer than the full number of queues possible are actually usedthe priority queues are limited in size and only the 1000 hypotheses with the highest probability are maintainedeach priority queue is assigned a threshold to select the hypotheses that are going to be extended and the process of assigning these thresholds is rather complicateda restriction on the possible word reorderings which is described in section 36 is appliedwang and waibel presents a search algorithm for the ibm2 translation model based on the a concept and multiple stacksan extension of this algorithm is demonstrated in wang and waibel here a reshuffling step on top of the original decoder is used to handle more complex translation models translation approaches that use the ibm2 model parameters but are based on dp are presented in garcıavarea casacuberta and ney and niessen et al an approach based on the hidden markov model alignments as used in speech recognition is presented in tillmann vogel ney and zubiaga and tillmann vogel ney zubiaga and sawaf this approach assumes that source and target language have the same word order and word order differences are dealt with in a preprocessing stagethe work by wu also uses the original ibm model parameters and obtains an efficient search algorithm by restricting the possible word reorderings using the socalled stochastic bracketing transduction grammarthree different decoders for the ibm4 translation model are compared in germann et al the first is a reimplementation of the stackbased decoder described in berger et al the second is a greedy decoder that starts with an approximate solution and then iteratively improves this first rough solutionthe third converts the decoding problem into an integer program and a standard software package for solving ip is usedalthough the last approach is guaranteed to find the optimal solution it is tested only for input sentences of length eight or shorterthis article will present a dpbased beam search decoder for the ibm4 translation modelthe decoder is designed to carry out an almost full search with a small number of search errors and with little performance degradation as measured by the word error criteriona preliminary version of the work presented here was published in tillmann and ney to explicitly describe the word order difference between source and target language brown et al introduced an alignment concept in which a source position j is mapped to exactly one target position i regular alignment example for the translation direction german to englishfor each german source word there is exactly one english target word on the alignment pathan example for this kind of alignment is given in figure 2 in which each german source position j is mapped to an english target position iin brown et al this alignment concept is used for model ibm1 through model ibm5for search purposes we use the inverted alignment concept as introduced in niessen et al and ney et alan inverted alignment is defined as follows inverted alignment i j bi here a target position i is mapped to a source 
position jthe coverage constraint for an inverted alignment is not expressed by the notation each source position j should be hit exactly once by the path of the inverted alignment bi1 b1 bi bithe advantage of the inverted alignment concept is that we can construct target sentence hypotheses from bottom to top along the positions of the target sentenceusing the inverted alignments in the maximum approximation we rewrite equation to obtain the following search criterion in which we are looking for the most likely target illustration of the transitions in the regular and in the inverted alignment modelthe regular alignment model is used to generate the sentence from left to right the inverted alignment model is used to generate the sentence from bottom to topthe following notation is used ei1 ei2 are the immediate predecessor target words ei is the word to be hypothesized p denotes the trigram language model probability p denotes the lexicon probability for translating the target word ei as source word fbi and p is the distortion probability for covering source position bi after source position bi1note that in equation two products over i are merged into a single product over ithe translation probability p is computed in the maximum approximation using the distortion and the lexicon probabilitiesfinally p is the sentence length model which will be dropped in the following for each source sentence fj1 to be translated we are searching for the unknown mapping that optimizes equation in section 33 we will introduce an auxiliary quantity that can be evaluated recursively using dp to find this unknown mappingwe will explicitly take care of the coverage constraint by introducing a coverage set c of source sentence positions that have already been processedfigure 3 illustrates the concept of the search algorithm using inverted alignments partial hypotheses are constructed from bottom to top along the positions of the target sentencepartial hypotheses of length i1 are extended to obtain partial hypotheses of the length iextending a partial hypothesis means covering a source sentence position j that has not yet been coveredfor a given grid point in the translation lattice the unknown target word sequence can be obtained by tracing back the translation decisions to the partial hypothesis at stage i 1the grid points are defined in section 33in the left part of the figure the regular alignment concept is shown for comparison purposesheld and karp presents a dp approach to solve the tsp an optimization problem that is defined as follows given are a set of cities 1 j and for each pair of cities jj the cost djj 0 for traveling from city j to city jwe are looking for the shortest tour starting and ending in city 1 that visits all cities in the set of cities exactly oncewe are using the notation c for the set of cities since it corresponds to a coverage set of processed source positions in mta straightforward way to find the shortest tour is by trying all possible permutations of the j citiesthe resulting algorithm has a complexity of odp can be used however to find the shortest tour in o which is a much smaller complexity for larger values of jthe approach recursively evaluates the quantity d d costs of the partial tour starting in city 1 ending in city j and visiting all cities in c subsets of cities c of increasing cardinality c are processedthe algorithm shown in table 1 works because not all permutations of cities have to be considered explicitlyduring the computation for a pair the order in which the 
cities in c have been visited can be ignored only the costs for the best path reaching j has to be storedfor the initialization the costs for starting from city 1 are set d d1k for each k e 2 cthen subsets c of increasing cardinality are processedfinally the cost for the optimal tour is obtained in the secondtolast line of the algorithmthe optimal tour itself can be found using a backpointer array in which the optimal decision for each grid point is storedfigure 4 illustrates the use of the algorithm by showing the supergraph that is searched in the held and karp algorithm for a tsp with j 5 citieswhen traversing the lattice from left to right following the different possibilities a partial path to a node j corresponds to the subset c of all cities on that path together with the last visited illustration of the algorithm by held and karp for a traveling salesman problem with j 5 citiesnot all permutations of cities have to be evaluated explicitlyfor a given subset of cities the order in which the cities have been visited can be ignored city jof all the different paths merging into the node j only the partial path with the smallest cost has to be retained for further computationin this section the held and karp algorithm is applied to statistical mtusing the concept of inverted alignments as introduced in section 31 we explicitly take care of the coverage constraint by introducing a coverage set c of source sentence positions that have already been processedhere the correspondence is according to the fact that each source sentence position has to be covered exactly once fulfilling the coverage constraintthe cities of the more complex translation tsp correspond roughly to triples the notation for which is given belowthe final path output by the translation algorithm will contain exactly one triple for each source position jthe algorithm processes subsets of partial hypotheses with coverage sets c of increasing cardinality c for a trigram language model the partial hypotheses are of the form where e e are the last two target words c is a coverage set for the already covered source positions and j is the last covered positionthe target word sequence that ends in e e is stored as a back pointer to the predecessor partial hypothesis and is not shown in the notationeach distance in the tsp now corresponds to the negative logarithm of the product of the translation distortion and language model probabilitiesthe following input source language string f1 fj fj here j is the previously covered source sentence position and e e are the predecessor wordsthe dp equation is evaluated recursively for each hypothesis the resulting algorithm is depicted in table 2some details concerning the initialization and the finding of the best target language string are presented in section 34 p is the trigram language probability for predicting the sentence boundary symbol the complexity of the algorithm is o where e is the size of the target language vocabularythe above search space is still too large to translate even a mediumlength input sentenceon the other hand only very restricted reorderings are necessary for example for the translation direction german to english the word order difference is mostly restricted to the german verb groupthe approach presented here assumes a mostly monotonic traversal of the source sentence positions from left to right2 a small number of positions may be processed sooner than they would be in that monotonic traversaleach source position then generates a certain number of target 
wordsthe restrictions are fully formalized in section 35a typical situation is shown in figure 5when translating the sentence monotonically from left to right the translation of the german finite verb kann which is the left verbal brace in this case is skipped until the german noun phrase mein kollege which is the subject of the sentence is translatedthen the right verbal brace is translated word reordering for the translation direction german to english the reordering is restricted to the german verb groupthe infinitive besuchen and the negation particle nichtthe following restrictions are used one position in the source sentence may be skipped for a distance of up to l 4 source positions and up to two source positions may be moved for a distance of at most are 10 source positions to formalize the approach we introduce four verb group states s order in which the german source positions are covered for the germantoenglish reordering example given in figure 5the states move and skip both allow a set of upcoming words to be processed sooner than would be the case in the monotonic traversalthe state initial is entered whenever there are no uncovered positions to the left of the rightmost covered positionthe sequence of states needed to carry out the word reordering example in figure 5 is given in figure 6the 13 source sentence words are processed in the order showna formal specification of the state transitions is given in section 35any number of consecutive german verb phrases in a sentence can be processed by the algorithmthe finitestate control presented here is obtained from a simple analysis of the germantoenglish word reordering problem and is not estimated from the training datait can be viewed as an extension of the ibm4 model distortion probabilitiesusing the above states we define partial hypothesis extensions of the following type not only the coverage set c and the positions j j but also the verb group states s s are taken into accountfor the sake of brevity we have omitted the target language words e e in the notation of the partial hypothesis extensionfor each extension an uncovered position is added to the coverage set c of the partial hypothesis and the verb group state s may changea more detailed description of the partial hypothesis extension for a certain state s is given in the next section in a more general contextcovering the first uncovered position in the source sentence we use the lantillmann and ney dp beam search for statistical mt guage model probability phere is the sentence boundary symbol which is thought to be at position 0 in the target sentencethe search starts in the hypothesis denotes the empty set where no source sentence position is coveredthe following recursive equation is evaluated the search ends in the hypotheses the last covered position may be in the range j jl j because some source positions may have been skipped at the end of the input sentence1 j denotes a coverage set including all positions from position 1 to position jthe final translation probability qf is where p denotes the trigram language model which predicts the sentence boundary at the end of the target sentenceqf can be obtained using an algorithm very similar to the one given in table 2the complexity of the verb group reordering for the translation direction german to english is o as shown in tillmann for the translation direction english to german the word reordering can be restricted in a similar way as for the translation direction german to englishagain the word order difference 
between the two languages is mainly due to the german verb groupduring the translation process the english verb group is decomposed as shown in figure 7when the sentence is translated monotonically from left to right the translation of the english finite verb can is moved and it is translated as the german left verbal brace before the english noun phrase my colleague which is the subject of the sentencethe translations of the infinitive visit and of the negation particle not are skipped until later in the translation processfor this translation direction the translation of one source sentence position may be moved for a distance of up to l 4 source positions and the translation of up to two source positions may be skipped for a distance of up to are 10 source positions thus the role of the skipping and the moving are simply reversed with respect to their roles in germantoenglish translationfor the example translation in figure 7 the order in which the source sentence positions are covered is given in figure 8we generalize the two approaches for the different translation directions as follows in both approaches we assume that the source sentence is mainly processed monotonicallya small number of upcoming source sentence positions may be processed earlier than they would be in the monotonic traversal the states skip and move are used as explained in the preceding sectionthe positions to be processed outside the monotonic traversal are restricted as follows word reordering for the translation direction english to german the reordering is restricted to the english verb groupthese restrictions will be fully formalized later in this sectionin the state move some source sentence positions are moved from later in the sentence to earlierafter source sentence positions are moved they are marked and the translation of the sentence is continued monotonically keeping track of the positions already coveredto formalize the approach we introduce four reordering states s to formalize the approach the following notation is introduced order in which the english source positions are covered for the englishtogerman reordering example given in figure 7 rmax is the rightmost covered and lmin is the leftmost uncovered source position you is the number of skipped positions and m is the number of moved positionsthe function card returns the cardinality of a set of source positionsthe function w describes the window size in which the word reordering takes placea procedural description for the computation of the set of successor hypotheses for a given partial hypothesis is given in table 3there are restrictions on the possible successor states a partial hypothesis in state skip cannot be expanded into a partial hypothesis in state move and vice versaif the coverage set for the newly generated hypothesis covers a contiguous initial block of source positions the state initial is enteredno other state s is considered as a successor state in this case the set of successor hypotheses succ by which to extend the partial hypothesis is computed using the constraints defined by the values for numskip widthskip nummove and widthmove as explained in the appendixin particular a source position k is discarded for extension if the window restrictions are violatedwithin the restrictions all possible successors are computedit can be observed that the set of successors as computed in table 3 is never emptyprocedural description to compute the set succ of successor hypotheses by which to extend a partial hypothesis input partial 
hypothesis output set succ of successor hypotheses there is an asymmetry between the two reordering states move and skip while in state move the algorithm is not allowed to cover the position lminit must first enter the state cover to do soin contrast for the state skip the newly generated hypothesis always remains in the state skip this is motivated by the word reordering for the german verb groupafter the right verbal brace has been processed no source words may be moved into the verbal brace from later in the sentencethere is a redundancy in the reorderings the same reordering might be carried out using either the state skip or move especially if widthskip and widthmove are about the samethe additional computational burden is alleviated somewhat by the fact that the pruning as introduced in section 38 does not distinguish hypotheses according to the statesa complexity analysis for different reordering constraints is given in tillmann we now compare the new word reordering approach with the approach used in berger et al in the approach presented in this article source sentence words are aligned with hypothesized target sentence words3 when a source sentence word is aligned we say its position is coveredduring the search process a partial hypothesis is extended by choosing an uncovered source sentence position and this choice is restrictedonly one of the first n uncovered positions in a coverage set may be chosen where n is set to 4this choice is illustrated in figure 9in the figure covered positions are marked by a filled circle and uncovered positions are marked by an unfilled circlepositions that may be covered next are marked by an unfilled squarethe restrictions for a coverage set c can be expressed in terms of the expression you defined in the previous section the number of uncovered source sentence positions to the left of the rightmost covered positiondemanding you 3 we obtain the s3 restriction illustration of the ibmstyle reordering constraint introduced in the appendixan upper bound of o for the word reordering complexity is given in tillmann in order to demonstrate the complexity of the proposed reordering constraints we have modified our translation algorithm to show for the different reordering constraints the overall number of successor states generated by the algorithm given in table 3the number of successors shown in figure 10 is counted for a pseudotranslation task in which a pseudosource word x is translated into the identically pseudo target word xno actual optimization is carried out the total number of successors is simply counted as the algorithm proceeds through subsets of increasing cardinalitythe complexity differences for the different reordering constraints result from the different number of coverage subsets c and corresponding reordering states s allowedfor the different reordering constraints we obtain the following results number of processed arcs for the pseudotranslation task as a function of the input sentence length j the complexity for the four different reordering constraints mon ge eg and s3 is giventhe complexity of the s3 constraint is close to j4to speed up the search a beam search strategy is usedthere is a direct analogy to the datadriven search organization used in continuousspeech recognition the full dp search algorithm proceeds cardinalitysynchronously over subsets of source sentence positions of increasing cardinalityusing the beam search concept the search can be focused on the most likely hypothesesthe hypotheses qe are distinguished 
according to the coverage set c with two kinds of pruning based on this coverage set after the pruning is carried out we retain for further consideration only hypotheses with a probability close to the maximum probabilitythe number of surviving hypotheses is controlled by four kinds of thresholds for the coverage and the cardinality pruning the probability qe is adjusted to take into account the uncovered source sentence positions c 1 jcto make this adjustment for a source word f at an uncovered source position we precompute an upper bound p for the product of language model and lexicon probability the above optimization is carried out only over the word trigrams that have actually been seen in the training dataadditionally the observation pruning described below is applied to the possible translations e of a source word f the upper bound is used in the beam search concept to increase the comparability between hypotheses covering different coverage setseven more benefit from the upper bound p can be expected if the distortion and the fertility probabilities are taken into account using the definition of p the following modified probability qe is used to replace the original probability qe and all pruning is applied to the new probability for the translation experiments equation is recursively evaluated over subsets of source positions of equal cardinalityfor reasons of brevity we omit the state description s in equation since no separate pruning according to the states s is carried outthe set of surviving hypotheses for each cardinality c is referred to as the beamthe size of the beam for cardinality c depends on the ambiguity of the translation task for that cardinalityto fully exploit the speedup of the dp beam search the search space is dynamically constructed as described in tillmann vogel ney zubiaga and sawaf rather than using a static search spaceto carry out the pruning the maximum probabilities with respect to each coverage set c and cardinality c are computed the coverage pruning threshold tc and the cardinality pruning threshold tc are used to prune active hypotheseswe call this pruning translation pruninghypotheses are pruned according to their translation probability for the translation experiments presented in section 4 the negative logarithms of the actual pruning thresholds tc and tc are reporteda hypothesis is discarded if its probability is below the corresponding thresholdfor the current experiments the coverage and the cardinality threshold are constant for different coverage sets c and cardinalities c together with the translation pruning histogram pruning is carried out the overall number n of active hypotheses for the coverage set c and the overall number n of active hypotheses for all subsets of a given cardinality may not exceed a given number again different numbers are used for coverage and cardinality pruningthe coverage histogram pruning is denoted by nc and the cardinality histogram pruning is denoted by nc if the numbers of active hypotheses for each coverage set c and cardinality c n and n exceed the above thresholds only the partial hypotheses with the highest translation probabilities are retained the third type of pruning conducted observation pruning the number of words that may be produced by a source word f is limitedfor each source language word f the list of its possible translations e is sorted according to where puni is the unigram probability of the target language word e only the best no target words e are hypothesized during the search process 
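As an illustration of how the coverage, cardinality, histogram, and observation pruning steps described above fit together, here is a minimal sketch. It is not the authors' implementation: the hypothesis representation, the use of negative-log scores for the thresholds, and the helper names prune_beam and observation_prune are assumptions made for this example.

```python
from collections import defaultdict

def prune_beam(hyps, t_coverage, t_cardinality, n_coverage, n_cardinality):
    """Prune one beam, i.e. all partial hypotheses of equal cardinality.
    Each hypothesis is a pair (coverage, score): `coverage` is a frozenset of
    covered source positions and `score` is the adjusted probability in
    negative-log space, so smaller is better and the thresholds play the role
    of the reported negative logarithms of the pruning thresholds."""
    if not hyps:
        return []
    # cardinality pruning: compare against the best hypothesis of this cardinality
    best = min(score for _, score in hyps)
    survivors = [(cov, score) for cov, score in hyps
                 if score <= best + t_cardinality]
    # cardinality histogram pruning: keep at most n_cardinality hypotheses overall
    survivors = sorted(survivors, key=lambda h: h[1])[:n_cardinality]
    # coverage pruning: compare only hypotheses sharing the same coverage set
    by_cov = defaultdict(list)
    for cov, score in survivors:
        by_cov[cov].append((cov, score))
    pruned = []
    for group in by_cov.values():
        best_cov = min(score for _, score in group)
        kept = [h for h in group if h[1] <= best_cov + t_coverage]
        # coverage histogram pruning: keep at most n_coverage hypotheses per set
        pruned.extend(sorted(kept, key=lambda h: h[1])[:n_coverage])
    return pruned

def observation_prune(candidates, unigram, n_obs):
    """Observation pruning: keep for a source word f only the n_obs target
    words e with the largest value p(f|e) * p_uni(e)."""
    ranked = sorted(candidates.items(),
                    key=lambda item: item[1] * unigram.get(item[0], 0.0),
                    reverse=True)
    return dict(ranked[:n_obs])
```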
the similarities are given mainly in the following input sentences are processed mainly monotonically from left to rightthe algorithm works cardinalitysynchronously meaning that all the hypotheses that are processed cover subsets of source sentence positions of equal cardinality c table 4 shows a twolist implementation of the search algorithm given in table 2 in which the beam pruning is includedthe two lists are referred to as s and snew s is the list of hypotheses that are currently expanded and snew is the list of newly twolist implementation of a dpbased search algorithm for statistical mt input source string f1 fj fj initial hypothesis lists s generated hypothesesthe search procedure processes subsets of covered source sentence positions of increasing cardinalitythe search starts with s where denotes the sentence start symbol for the immediate two predecessor words and 0 denotes the empty coverage set in which no source position is covered yetfor the initial search state the position last covered is set to 0a set s of active hypotheses is expanded for each cardinality c using lexicon model language model and distortion model probabilitiesthe newly generated hypotheses are added to the hypothesis set snew for hypotheses that are not distinguished according to our dp approach only the best partial hypothesis is retained for further considerationthis socalled recombination is implemented as a set of simple lookup and update operations on the set snew of partial hypothesesduring the partial hypothesis extensions an anticipated pruning is carried out hypotheses are discarded before they are considered for recombination and are never added to snewafter the extension of all partial hypotheses in s a pruning step is carried out for the hypotheses in the newly generated set snewthe pruning is based on two simple sorting steps on the list of partial hypotheses snewfirst the partial hypotheses are sorted according to their translation scores cardinality pruning can then be carried out simply by running down the list of hypotheses starting with the maximumprobability hypothesis and applying the cardinality thresholdsthen the partial hypotheses are sorted a second time according to their coverage set c and their translation scoreafter this sorting step all partial hypotheses that cover the same subset of source sentence positions are located in consecutive fragments in the overall list of partial hypothesescoverage pruning is carried out in a single run over the list of partial hypotheses for each fragment corresponding to the same coverage set c the coverage pruning threshold is appliedthe partial hypotheses that survive the two pruning stages are then written into the socalled bookkeeping array for the next expansion step the set s is set to the newly generated list of hypothesesfinally the target translation is constructed from the bookkeeping array392 details for ibm4 modelin this section we outline how the dpbased beam search approach can be carried out using the full set of ibm4 parametersfirst the full set of ibm4 parameters does not make the simplifying assumption given in section 31 namely that source and target sentences are of equal length either a target word e may be aligned with several source words or a single source word may produce zero one or two target words as described in berger et al or bothzero target words are generated if f is aligned to the null word e0generating a single target word e is the regular casetwo target words may be generatedthe costs for generating the target 
word e are given by its fertility φ and the language model probability no lexicon probability is usedduring the experiments we restrict ourselves to triples of target words actually seen in the training datathis approach is used for the frenchtoenglish translation experiments presented in this articleanother approach for mapping a single source language word to several target language words involves preprocessing by the wordjoining algorithm given in tillmann which is similar to the approach presented in och tillmann and ney target words are joined during a training phase and several joined target language words are dealt with as a new lexicon entrythis approach is used for the germantoenglish translation experiments presented in this articlein order to deal with the ibm4 fertility parameters within the dpbased concept we adopt the distinction between open and closed hypotheses given in berger et al a hypothesis is said to be open if it is to be aligned with more source positions than it currently is otherwise it is called closedthe difference between open and closed is used to process the input sentence one position a time the word reordering restrictions and the beam search pruning techniques are directly carried over to the full set of ibm4 parameters since they are based on restrictions on the coverage vectors c onlyto ensure its correctness the implementation was tested by carrying out forced alignments on 500 germantoenglish training sentence pairsin a forced alignment the source sentence fj1 and the target sentence ei1 are kept fixed and a full search without reordering restrictions is carried out only over the unknown alignment aj1the language model probability is divided out and the resulting probability is compared to the viterbi probability as obtained by the training procedurefor 499 training sentences the viterbi alignment probability as obtained by the forcedalignment search was exactly the same as the one produced by the training procedurein one case the forcedalignment search did obtain a better viterbi probability than the training proceduretranslation experiments are carried out for the translation directions german to english and english to german and for the translation directions french to english and english to french section 41 reports on the performance measures usedsection 42 shows translation results for the verbmobil tasksections 421 and 422 describe that task and the preprocessing steps appliedin sections 423 through 425 the efficiency of the beam search pruning techniques is shown for germantoenglish translation as the most detailed experiments are conducted for that directionsection 426 gives translation results for the translation direction english to germanin section 43 translation results for the canadian hansards task are reportedto measure the performance of the translation methods we use three types of automatic and easytouse measures of the translation errorsadditionally a subjective evaluation involving human judges is carried out the following evaluation criteria are employed performed to convert the generated string into the reference target stringthis performance criterion is widely used in speech recognitionthe minimum is computed using a dp algorithm and is typically referred to as edit or levenshtein distanceevaluation measures subjective judgments by test persons are carried out the following scale for the error count per sentence is used in these subjective evaluations each translated sentence is judged by a human examiner according to the 
above error scale several human judges may be involved in judging the same translated sentencesubjective evaluation is carried out only for the verbmobil test147 test set421 the task and the corpusthe translation system is tested on the verbmobil task in that task the goal is the translation of spontaneous speech in facetoface situations for an appointment scheduling domainwe carry out experiments for both translation directions german to english and english to germanalthough the verbmobil task is still a limiteddomain task it is rather difficult in terms of vocabulary size namely about 5000 words or more for each of the two languages second the syntactic structures of the sentences are rather unrestrictedalthough the ultimate goal of the verbmobil project is the translation of spoken language the input used for the translation experiments reported on in this article is mainly the correct orthographic transcription of the spoken sentencesthus the effects of spontaneous speech are present in the corpus the effect of speech recognition errors however is not coveredthe corpus consists of 58073 training pairs its characteristics are given in table 5for the translation experiments a trigram language model with a perplexity of 281 is usedthe following two test corpora are used for the translation experiments test331 this test set consists of 331 test sentencesonly automatic evaluation is carried out on this test corpus the wer and the mwer are computedfor each test sentence in the source language there is a range of acceptable reference translations provided by a human translator who is asked to produce wordtoword translations wherever it is possiblepart of the reference sentences are obtained by correcting automatic translations of the test sentences that are produced using the approach presented in this article with different reordering constraintsthe other part is produced from the source sentences without looking at any of their translationsthe test331 test set is used as heldout data for parameter optimization furthermore the beam search experiments in which the effect of the different pruning thresholds is demonstrated are carried out on the test331 test settest147 the second separate test set consists of 147 test sentencestranslation results are given in terms of mwer and sserno parameter optimization is carried out on the test147 test set the parameter values as obtained from the experiments on the test331 test set are used422 preprocessing stepsto improve the translation performance the following preprocessing steps are carried out categorization we use some categorization which consists of replacing a single word by a categorythe only words that are replaced by a category label are proper nouns denoting german citiesusing the new labeled corpus all probability models are trained anewto produce translations in the normal language the categories are translated by rule and are inserted into the target sentenceword joining target language words are joined using a method similar to the one described in och tillmann and ney words are joined to handle cases like the german compound noun zahnarzttermin for the english dentists appointment because a single word has to be mapped to two or more target wordsthe word joining is applied only to the target language words the source language sentences remain unchangedduring the search process several joined target language words may be generated by a single source language wordmanual lexicon to account for unseen words in the test sentences and to 
obtain a greater number of focused translation probabilities p we use a bilingual germanenglish dictionaryfor each word e in the target vocabulary we create a list of source translations f according to this dictionarythe translation probability pdic for the dictionary entry is defined as where ne is the number of source words listed as translations of the target word e the dictionary probability pdic is linearly combined with the automatically trained translation probabilities paut to obtain smoothed probabilities p p pdic a paut for the translation experiments the value of the interpolation parameter is fixed at a 05423 effect of the scaling factorsin speech recognition in which bayes decision rule is applied a language model scaling factor αlm is used a typical value is αlm 15this scaling factor is employed because the language model probabilities are more reliably estimated than the acoustic probabilitiesfollowing this use of a language model scaling factor in speech recognition such a factor is introduced into statistical mt toothe optimization criterion in equation is modified as follows where p is the language model probability of the target language sentencein the experiments presented here a trigram language model is used to compute pthe effect of the language model scaling factor αlm is studied on the test331 test seta minimum mwer is obtained for αlm 08 as reported in tillmann unlike in speech recognition the translation model probabilities seem to be estimated as reliably as the language model probabilities in statistical mta second scaling factor αd is introduced for the distortion model probabilities pa minimum mwer is obtained for αd 04 as reported in tillmann the wer and mwer on the test331 test set increase significantly if no distortion probability is used for the case αd 00the benefit of a distortion probability scaling factor of αd 04 comes from the fact that otherwise a low distortion probability might suppress longdistant word reordering that is important for germantoenglish verb group reorderingthe setting αlm 08 and αd 04 is used for all subsequent translation results mwer and sser on the test147 test set as a function of three reordering constraints mon ge and s3 the computing time is given in terms of central processing unit time per sentence for the sser it turns out that restricting the word reordering such that it may not cross punctuation marks improves translation performance significantlythe average length of the sentence fragments that are separated by punctuation marks is rather small 45 words per fragmenta coverage pruning threshold of tc 50 and an observation pruning of no 50 are applied during the experiments4 no other type of pruning is used5 the mon constraint performs worst in terms of both mwer and sserthe computing time is small since no reordering is carried outconstraints ge and s3 perform nearly identically in terms of both mwer and sserthe ge constraint however works about three times as fast as the s3 constrainttable 7 shows example translations obtained under the three different reordering constraintsagain the mon reordering constraint performs worstin the second and third translation examples the s3 word reordering constraint performs worse than the ge reordering constraint since it cannot take the word reordering due to the german verb group properly into accountthe german finite verbs bin and konnten are too far away from the personal pronouns ich and sie to be reordered properlyin the last example the less restrictive s3 reordering 
constraint leads to a better translation the ge translation is still acceptable though beam search pruning is demonstratedtranslation results on the test331 test set are presented to evaluate the effectiveness of the pruning techniques6 the quality of the search algorithm with respect to the ge and s3 reordering constraints is evaluated using two criteria 1the number of search errors for a certain combination of pruning thresholds is counteda search error occurs for a test sentence if the final translation probability qf for a candidate translation ei1 as given in equation is smaller than a reference probability for that test sentencewe will compute reference probabilities two ways as explained beloweffect of the coverage pruning threshold tc on the number of search errors and mwer on the test331 test set a cardinality histogram pruning of 200000 is applied to restrict the maximum overall size of the search spacethe negative logarithm of tc is reported leads to a higher word error rate since the optimal path through the translation lattice is missed resulting in translation errorstwo automatically generated reference probabilities are usedthese probabilities are computed separately for the reordering constraints ge and s3 qref a forced alignment is carried out between each of the test sentences and its corresponding reference translation only a single reference translation for each test sentence is usedthe probability obtained for the reference translation is denoted by qrefqf a translation is carried out with conservatively large pruning thresholds yielding a translation close to the one with the maximum translation probabilitythe translation probability for that translation is denoted by qffirst in a series of experiments we study the effect of the coverage and cardinality pruning for the reordering constraints ge and s3the experiments are carried out on two different pruning dimensions both tables use an observation pruning of no 50the effect of the coverage pruning threshold tc is demonstrated in table 8for the translation experiments reported in this table the cardinality pruning threshold is set to t oo thus no comparison between partial hypotheses that do not cover the same set c of source sentence positions is carried outto restrict the overall size of the search space in terms of cpu time and memory requirements a cardinality pruning of n 200000 is appliedas can be seen from table 8 mwer and the number of search errors decrease significantly as the coverage pruning threshold tc increasesfor the ge reordering constraint mwer decreases from 735 to 249for a coverage pruning threshold tc 50 mwer remains nearly constant at 250 although search errors still occurfor the s3 reordering constraint mwer decreases from 700 to 283the largest coverage threshold tested for the s3 constraint is tc 50 since for larger threshold values tc the search procedure cannot be carried out because of memory and time restrictionsthe number of search errors is reduced as the coverage pruning threshold is increasedit turns out to be difficult to verify search errors by looking at the reference translation probabilities qref alonethe translation with the maximum translation probability seems to be quite narrowly definedthe coverage pruning is more effective for the ge constraint than for the s3 constraint since the overall search space for the ge reordering is smallertable 9 shows the effect of the cardinality pruning threshold t on mwer when no coverage pruning is carried out the cardinality threshold t has a 
strong effect on mwer which decreases significantly as the cardinality threshold t increasesfor the ge reordering constraint mwer decreases from 485 to 249 for the s3 reordering constraint mwer decreases from 514 to 282for the coverage threshold t 150 the ge constraint works about four times as fast as the s3 constraint since the overall search space for the s3 constraint is much largeralthough the overall search space is much larger for the s3 constraint for smaller values of the coverage threshold tc 50 the s3 constraint works as fast as the ge constraint or even faster because only a very small portion of the overall search space is searched for small values of the cardinality pruning threshold tthere is some computational overhead in expanding a partial hypothesis for the ge constraint because the finitestate control has to be handledno results are obtained for the s3 constraint and the coverage threshold t 175 because of memory restrictionsthe number of search errors is reduced as the cardinality pruning threshold is increasedagain it is difficult to verify search errors by looking at the reference translation probabilities aloneboth coverage and cardinality pruning are more efficient for the ge reordering constraint than for the s3 reordering constraintfor the s3 constraint no translation results are obtained for a coverage threshold t 50 without cardinality pruning applied because of memory and computing time restrictionsfor the ge constraint virtually a full search can be carried out where only observation pruning is applied identical target translations and translation probabilities are produced for the hypothesis files for the two cases tc 100 t and tc t 150since the pruning is carried out independently on two different pruning dimensions no search errors will occur if the thresholds are further increasedtable 10 shows the effect of the observation pruning parameter no on mwer for the reordering constraint ge mwer is significantly reduced by hypothesizing up to the best 50 target words a for a source language word f mwer increases from 249 to 293 when the number of hypothesized words is decreased to only a single wordtable 11 demonstrates the effect of the combination of the coverage pruning threshold tc 50 and the cardinality pruning threshold t 125 where the actual values are found in informal experiments in a typical setting of the two parameters t should be at least twice as big as tcfor the ge reordering constraint the average computing time is about seven seconds per sentence without any loss in translation performance as measured in terms of mwerfor the s3 reordering constraint the average computing time per sentence is 27 secondsagain the combination of coverage and cardinality pruning works more efficiently for the ge constraintthe memory requirement for the algorithm is about 100 mb426 englishtogerman translation experimentsa series of translation experiments for the translation direction english to german are also carried outthe results given demonstration of the combination of the two pruning thresholds tc 50 and t 125 to speed up the search process for the two reordering constraints ge and s3 the translation performance is shown in terms of mwer on the test331 test set in terms of wer and per are shown in table 12for the englishtogerman translation direction a single reference translation for each test sentence is used to carry out the automatic evaluationthe translation task for the translation direction english to german is more difficult than for the translation 
direction german to english the trigram language model perplexity increases from 383 to 682 on the test331 test set as can be seen in table 5no parameter optimization is carried out for this translation direction the parameter settings are carried over from the results obtained in table 11the word error rates for the translation direction english to german are significantly higher than those for the translation direction german to englishthere are several reasons for this german vocabulary and perplexity are significantly larger than those for english and only a single reference translation per test sentence is available for englishtogerman translationthere is only a very small difference in terms of word error rates for the reordering constraints eg and s3 in particular wer is 701 for boththe reordering constraint mon performs slightly worse wer increases to 706 and per increases to 570table 13 shows translation examples for the translation direction english to germanthe mon constraint performs worst there is no significant difference in quality of translations produced under the eg and the s3 constraints431 the task and the corpusthe second corpus on which we perform translation experiments is the hansard corpusby law the proceedings of the canadian parliament are recorded in both french and english the remarks of the parliament members are written down in whichever of the two languages they usethey are then translated into the other language to produce complete sets of the proceedings one in french and the other in englishthe resulting bilingual data have been sentencealigned using statistical methods originally about three million sentences were selectedhere we use a subset of the original training data the details regarding this subset are given in table 14the hansards corpus presents by far a more difficult task than the verbmobil corpus in terms of vocabulary size and number of training sentencesthe training and test sentences are less restrictive than for the verbmobil taskfor the translation experiments on the hansards corpus no word joining is carried outtwo target words can be produced by a single source word as described in section 392432 translation resultsas can be seen in table 15 for the translation direction french to english and in table 16 for the translation direction english to french the word error rates are rather high compared to those for the verbmobil taskthe reason for the higher error rates is that as noted in the previous section the hansards task is by far less restrictive than the verbmobil task and the vocabulary size is much largerthere is only a slight difference in performance between the mon and the s3 reordering constraints on the hansards taskthe computation time is also rather high compared to the verbmobil task for the s3 constraint the average translation time is about 3 minutes per sentence for the translation direction english to french and about 10 minutes per sentence for the translation direction french to englishthe following parameter setting is used for the experiment conducted here tc 50 t 100 nc 250 and to 12no cardinality histogram pruning is carried outas for the germantoenglish translation experiments word reordering is restricted so that it may not cross punctuation boundariesthe resulting fragment lengths are much larger for the translation direction english to french and still larger for the translation direction french to english when compared to the fragment lengths for the translation direction german to english hence the high cpu 
times in an additional experiment for the translation direction french to english and the reordering constraint s3 we find we can speed up the translation time to about 18 seconds per sentence by using the following parameter setting tc 30 t 75 nc 20 n 400 and no 5 for the resulting hypotheses file per increases only slightly from 514 to 516 translation examples for the translation direction french to english under the s3 reordering constraint are given in table 17 (table 17 excerpt, s3 output: i have the intention of speaking today about the many improvements in pensions for all canadians especially those programs; input: chacun en lui même est très complexe et le lien entre les deux l'est encore davantage de sorte que pour beaucoup la situation présente est confuse; s3 output: each in itself is very complex and the relationship between the two is more so much for the present situation is confused) the french input sentences show some preprocessing that is carried out beforehand to simplify the translation task the translations produced are rather approximative in some cases although the general meaning is often preserved we have presented a dpbased beam search algorithm for the ibm4 translation model the approach is based on a dp solution to the tsp and it gains efficiency by imposing constraints on the allowed word reorderings between source and target language a datadriven search organization in conjunction with appropriate pruning techniques is proposed for the mediumsized verbmobil task a sentence can be translated in a few seconds on average with a small number of search errors and no performance degradation as measured by the word error criterion used word reordering is parameterized using a set of four parameters in such a way that it can easily be adopted to new translation directions a finitestate control is added and its usefulness is demonstrated for the translation direction german to english in which the word order difference between the two languages is mainly due to the german verb group future work might aim at a tighter integration of the ibm4 model distortion probabilities and the finitestate control the finitestate control itself may be learned from training data the applicability of the algorithm applied in the experiments in this article is not restricted to the ibm translation models or to the simplified translation model used in the description of the algorithm in section 3 since the efficiency of the beam search approach is based on restrictions on the allowed coverage vectors c alone the approach may be used for different types of translation models as well on the other hand since the decoding problem for the ibm4 translation model is provably npcomplete as shown in knight and germann et al word reordering restrictions as introduced in this article are essential for obtaining an efficient search algorithm that guarantees that a solution close to the optimal one will be found to quantify the reordering restrictions in section 35 the four nonnegative numbers numskip widthskip nummove and widthmove are used within the implementation of the dp search the restrictions are provided to the algorithm as an input parameter of the following type s numskip_widthskip m nummove_widthmove the meaning of the reordering string is as follows the two numbers following s that are separated by an underscore describe the way words may be skipped the two numbers following m that are separated by an underscore describe the way words may be moved during word reordering the first number after s and m denotes the number of positions that may be skipped or moved respectively the second number after s and m restricts the distance a word may be skipped or moved respectively
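To make the format of these reordering strings concrete, the following small parser turns a string such as the empty string or s 01_04 m 02_10 into the four constraint parameters. This is illustrative only: the exact syntax accepted by the real implementation may differ, and the function name parse_reordering_string is invented for the example.

```python
import math

def parse_reordering_string(spec):
    """Parse a reordering restriction string into the four parameters
    (num_skip, width_skip, num_move, width_move).  An omitted S or M part
    means the corresponding reordering is not allowed, and 'inf' means the
    value is unrestricted, as described in the text."""
    def to_num(token):
        return math.inf if token == "inf" else int(token)

    params = {"s": (0, 0), "m": (0, 0)}      # default: no skipping, no moving
    tokens = spec.lower().split()
    i = 0
    while i < len(tokens):
        key = tokens[i]
        if key not in params:
            raise ValueError("unknown reordering key: %r" % key)
        num, width = tokens[i + 1].split("_")
        params[key] = (to_num(num), to_num(width))
        i += 2
    return (*params["s"], *params["m"])

print(parse_reordering_string("s 01_04 m 02_10"))   # (1, 4, 2, 10)  german-to-english
print(parse_reordering_string("s 03_inf"))          # (3, inf, 0, 0) ibm-style skipping
print(parse_reordering_string(""))                  # (0, 0, 0, 0)   monotonic search
```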
these width parameters restrict the word reordering to take place within a window of a certain size established by the distance between the positions lmin and rmax as defined in section 35 in the notation either the substring headed by s or that headed by m may be omitted altogether to indicate that the corresponding reordering is not allowed any numerical value in the string may be set to inf denoting that an arbitrary number of positions may be skipped or moved or that the moving or skipping distance may be arbitrarily large the following reordering strings are used in this article
e (the empty string) denotes the reordering restriction in which no reordering is allowed
s 01_04 m 02_10 describes the germantoenglish word reordering up to one word may be skipped for at most 4 positions and up to two words may be moved up to 10 positions
s 02_10 m 01_04 describes the englishtogerman word reordering up to two words may be skipped for at most 10 positions and up to one word may be moved for up to 4 positions
s 03_inf describes the ibmstyle word reordering given in section 36 up to three words may be skipped for an unrestricted number of positions no words may be moved
s inf_inf or m inf_inf denote the word reordering without restrictions
the word reordering strings can be directly used as input parameters to the dpbased search procedure to test different reordering restrictions within a single implementation acknowledgments this work has been supported as part of the verbmobil project contract number 01 iv 601 a by the german federal ministry of education science research and technology and as part of the eutrans
J03-1005
word reordering and a dynamic programming beam search algorithm for statistical machine translation in this article we describe an efficient beam search algorithm for statistical machine translation based on dynamic programming the search algorithm uses the translation model presented in brown et al starting from a dpbased solution to the travelingsalesman problem we present a novel technique to restrict the possible word reorderings between source and target language in order to achieve an efficient search algorithm word reordering restrictions especially useful for the translation direction german to english are presented the restrictions are generalized and a set of four parameters to control the word reordering is introduced which then can easily be adopted to new translation directions the beam search procedure has been successfully tested on the verbmobil task and on the canadian hansards task for the mediumsized verbmobil task a sentence can be translated in a few seconds only a small number of search errors occur and there is no performance degradation as measured by the word error criterion used in this article in our work a beamsearch algorithm used for tsp is adapted to work with an ibm4 wordbased model and phrasebased model respectively
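The summary above describes a DP beam search that proceeds cardinality-synchronously over coverage subsets with recombination and pruning. The toy sketch below shows only that control structure and is not the authors' system: it scores with a simplified lexicon-plus-bigram model instead of IBM-4, applies no reordering restrictions or coverage/cardinality thresholds, and every name in it (beam_search_translate, lm_score, beam_size) is an assumption made for the illustration.

```python
from collections import defaultdict

def beam_search_translate(src, translations, lm_score, beam_size=10):
    """Toy cardinality-synchronous beam search over coverage subsets.
    `src` is the list of source words, `translations[f]` maps a source word to
    a list of (target_word, log_prob) options (every source word is assumed to
    have at least one option), and `lm_score(prev, word)` is a bigram log
    probability.  A hypothesis is (coverage frozenset, last target word,
    score, output list); only the `beam_size` best hypotheses per coverage set
    survive.  Reordering is unrestricted here, so the number of coverage sets
    grows exponentially; this is meant only as an illustration."""
    start = (frozenset(), "<s>", 0.0, [])
    beam = {frozenset(): [start]}
    for cardinality in range(1, len(src) + 1):
        # expand every hypothesis that covers cardinality-1 source positions
        new_beam = defaultdict(list)
        for hyps in beam.values():
            for coverage, prev, score, out in hyps:
                for j, f in enumerate(src):
                    if j in coverage:
                        continue
                    for e, lex_lp in translations.get(f, []):
                        cov = coverage | {j}
                        s = score + lex_lp + lm_score(prev, e)
                        new_beam[cov].append((cov, e, s, out + [e]))
        # recombination and histogram pruning: best hypotheses per coverage set
        beam = {cov: sorted(hyps, key=lambda h: h[2], reverse=True)[:beam_size]
                for cov, hyps in new_beam.items()}
    final = max((h for hyps in beam.values() for h in hyps), key=lambda h: h[2])
    return final[3], final[2]

# tiny usage example with invented probabilities
lex = {"das": [("the", -0.1)], "haus": [("house", -0.2)]}
lm = lambda prev, word: -0.05          # constant stand-in for a real bigram LM
print(beam_search_translate(["das", "haus"], lex, lm))
```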
introduction to the special issue on the web as corpus the web teeming as it is with language data of all manner of varieties and languages in vast quantity and freely available is a fabulous linguists playgroundthis special issue of computational linguistics explores ways in which this dream is being exploredthe web is immense free and available by mouse clickit contains hundreds of billions of words of text and can be used for all manner of language researchthe simplest language use is spell checkingis it speculater or speculatorgoogle gives 67 for the former and 82000 for the latterquestion answeredlanguage scientists and technologists are increasingly turning to the web as a source of language data because it is so big because it is the only available source for the type of language in which they are interested or simply because it is free and instantly availablethe mode of work has increased dramatically from a standing start seven years ago with the web being used as a data source in a wide range of research activities the papers in this special issue form a sample of the best of itthis introduction to the issue aims to survey the activities and explore recurring themeswe first consider whether the web is indeed a corpus then present a history of the theme in which we view the web as a development of the empiricist turn that has brought corpora center stage in the course of the 1990swe briefly survey the range of webbased nlp research then present estimates of the size of the web for english and for other languages and a simple method for translating phrasesnext we open the pandoras box of representativeness we then introduce the articles in the special issue and conclude with some thoughts on how the web could be put at the linguists disposal rather more usefully than current search engines allowto establish whether the web is a corpus we need to find out discover or decide what a corpus ismcenery and wilson say in principle any collection of more than one text can be called a corpusbut the term corpus when used in the context of modern linguistics tends most frequently to have more specific connotations than this simple definition provides forthese may be considered under four main headings sampling and representativeness finite size machinereadable form a standard referencewe would like to reclaim the term from the connotationsmany of the collections of texts that people use and refer to as their corpus in a given linguistic literary or languagetechnology study do not fita corpus comprising the complete published works of jane austen is not a sample nor is it representative of anything elsecloser to home manning and schutze observe in statistical nlp one commonly receives as a corpus a certain amount of data from a certain domain of interest without having any say in how it is constructedin such cases having more training data is normally more useful than any concerns of balance and one should simply use all the text that is availablewe wish to avoid a smuggling of values into the criterion for corpushoodmcenery and wilson mix the question what is a corpus with what is a good corpus muddying the simple question is corpus x good for task y with the semantic question is x a corpus at all the semantic question then becomes a distraction all too likely to absorb energies that would otherwise be addressed to the practical oneso that the semantic question may be set aside the definition of corpus should be broadwe define a corpus simply as a collection of texts if that seems too broad 
the one qualification we allow relates to the domains and contexts in which the word is used rather than its denotation a corpus is a collection of texts when considered as an object of language or literary studythe answer to the question is the web a corpus is yesfor chemistry or biology the computer is merely a place to store and process information gleaned about the object of studyfor linguistics the object of study itself is found on computerstext is an information object and a computers hard disk is as valid a place to go for its realization as the printed page or anywhere elsethe onemillionword brown corpus opened the chapter on computerbased language study in the early 1960snoting the singular needs of lexicography for big data in the 1970s sinclair and atkins inaugurated the cobuild project which raised the threshold of viable corpus size from one million to by the early 1980s eight million words ten years on atkins again took the lead with the development of the british national corpus which raised horizons tenfold once again with its 100 million words and was in addition widely available at low cost and covered a wide spectrum of varieties of contemporary british english1 as in all matters zipfian logarithmic graph paper is requiredwhere corpus size is concerned the steps of interest are 1 10 100 not 1 2 3 corpora crashed into computational linguistics at the 1989 acl meeting in vancouver but they were large messy ugly objects clearly lacking in theoretical integrity in all sorts of ways and many people were skeptical regarding their role in the disciplinearguments raged and it was not clear whether corpus work was an acceptable part of the fieldit was only with the highly successful 1993 special issue of this journal using large corpora that the relation between computational linguistics and corpora was consummatedthere are parallels with web corpus workthe web is anarchic and its use is not in the familiar territory of computational linguisticshowever as students with no budget or contacts realize it is the obvious place to obtain a corpus meeting their specifications as companies want the research they sanction to be directly related to the language types they need to handle as copyright continues to constrain traditional corpus development2 as people want to explore using more data and different text types so webbased work will growthe web walked in on acl meetings starting in 1999rada mihalcea and dan moldovan used hit counts for carefully constructed search engine queries to identify rank orders for word sense frequencies as an input to a word sense disambiguation enginephilip resnik showed that parallel corporauntil then a promising research avenue but largely constrained to the englishfrench canadian hansardcould be found on the web we can grow our own parallel corpus using the many web pages that exist in parallel in local and in major languageswe are glad to have the further development of this work presented in this special issuein the student session of acl 2000 rosie jones and rayid ghani showed how using the web one can build a languagespecific corpus from a single document in that languagein the main session atsushi fujii and tetsuya ishikawa demonstrated that descriptive definitionlike collections can be acquired from the websince then there have been many papers at acl and elsewhere and we can mention only a fewthe eu meaning project takes forward the exploration of the web as a data source for word sense disambiguation working from the premise that within a 
domain words often have just one meaning and that domains can be identified on the webmihalcea and tchklovski complement this use of web as corpus with web technology to gather manual word sense annotations on the word expert web site3 santamaria et al in this issue discuss how to link word senses to web directory nodes and thence to web pagesthe web is being used to address data sparseness for language modelingin addition to keller and lapata and references therein volk gathers lexical statistics for resolving prepositional phrase attachments and villasenorpineda et al balance their corpus using web documentsthe information retrieval community now has a web track as a component of its trec evaluation initiativethe corpus for this exercise is a substantial sample of the web largely using documents in the gov top level domain as frozen at a given date the web has recently been used by groups at sheffield and microsoft among others as a source of answers for questionanswering applications in a merge of search engine and languageprocessing technologies are exploring the automatic population of existing ontologies using the web as a source for new instancesvarantola shows how translators can use justintime sublanguage corpora to choose correct target language terms for areas in which they are not expertfletcher demonstrates methods for gathering and using web corpora in a languageteaching contextone hundred million words is a large enough corpus for many empirical strategies for learning about language either for linguists and lexicographers or for technologies that need quantitative information about the behavior of words as input however for some purposes it is not large enoughthis is an outcome of the zipfian nature of word frequenciesalthough 100 million is a huge number and the bnc contains ample information on the dominant meanings and usage patterns for the 10000 words that make up the core of english the bulk of the lexical stock occurs less than 50 times in the bnc which is not enough to draw statistically stable conclusions about the wordfor rarer words rare meanings of common words and combinations of words we frequently find no evidence at allresearchers are obliged to look to larger data sources they find that probabilistic models of language based on very large quantities of data even if those data are noisy are better than ones based on estimates from smaller cleaner data setsanother argument is made vividly by banko and brill they explore the performance of a number of machine learning algorithms as the size of the training corpus grows from a million to a billion wordsall the algorithms steadily improve in performance though the question which is best gets different answers for different data sizesthe moral performance improves with data size and getting more data will make more difference than finetuning algorithmsdragomir radev has made a useful distinction between nlp giving and taking4 nlp can give to the web technologies such as summarization machine translation multilingual document retrieval questionanswering and other strategies for finding not only the right document but the right part of a document and tagging parsing and other core technologies taking is simply using the web as a source of data for any cl or nlp goal and is the theme of this special issueif we focus too closely on the giving side of the equation we look only at short to mediumterm goalsfor the longer term for giving as well as for other purposes a deeper understanding of the linguistic nature of the 
web and its potential for clnlp is required for that we must take the web itself in whatever limited way as an object of study much web search engine technology has been developed with reference to language technology the prototype for altavista was developed in a joint project between oxford university press and dec language identification algorithms now widely used in web search engines were developed as nlp technology the special issue explores a homecoming of web technologies with the web now feeding one of the hands that fostered it there were 56 million registered network addresses in july 1999 125 million in january 2001 and 172 million in january 2003 a plot of this growth of the web in terms of computer hosts can easily be generated linguistic aspects take a little more work and can be estimated only by sampling and extrapolation lawrence and giles compared the overlap between page lists returned by different web browsers over the same set of queries and estimated that in 1999 there were 800 million indexable web pages available by sampling pages and estimating an average page length of seven to eight kilobytes of nonmarkup text they concluded that there might be six terabytes of text available then in 2003 google claims to search four times this number of web pages which raises the number of bytes of text available just through this one web server to over 20 terabytes from directly accessible web pages at an average of 10 bytes per word a generous estimate for latinalphabet languages that suggests two thousand billion words the web is clearly a multilingual corpus how much of it is english xu estimated that 71 of the pages were written in english followed by japanese german french chinese spanish italian and swedish we have measured the counts of some english phrases according to various search engines over time and compared them with counts in the bnc which we know has 100 million words table 1 shows these counts in the bnc on altavista in 1998 and in 2001 and then on alltheweb in 2003 (table 1 caption: frequencies of english phrases in the bnc and on altavista in 1998 and 2001 and on alltheweb in 2003 the counts for the bnc and altavista are for individual occurrences of the phrase the counts for alltheweb are page counts) for example the phrase deep breath appears 732 times in the bnc it was indexed 54550 times by altavista in 1998 this rose to 170921 in 2001 and in 2003 we could find 868631 web pages containing the contiguous words deep breath according to alltheweb the numbers found through the search engines are more than three orders of magnitude higher than the bnc counts giving a first indication of the size of the english corpus available on the web we can derive a more precise estimate of the number of words available through a search engine by using the counts of function words as predictors of corpus size function words such as the with and in occur with a frequency that is relatively stable over many different types of texts from a corpus of known size we can calculate the frequency of the function words and extrapolate in the 90millionword writtenenglish component of the bnc the appears 5776487 times around seven times for every 100 words in the u.s. declaration of independence the occurs 84 times we predict that the declaration is about 84 x 100 / 7 = 1200 words long in fact the text contains about 1500 words using the frequency of one word gives a first approximation a better result can be obtained by using more data points
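The function-word extrapolation just described fits in a few lines of code. This is a sketch of the idea rather than the authors' procedure: the criterion used here for setting aside discrepant predictions (a fixed factor around the median) is an assumption, since the text that follows only says that such words were set aside, and the name estimate_corpus_size is invented.

```python
def estimate_corpus_size(rel_freq, index_freq, discrepancy=3.0):
    """Estimate the number of words accessible through a search engine from
    function-word counts.  `rel_freq[w]` is the relative frequency of word w
    in a corpus of known size, `index_freq[w]` is the number of occurrences
    the engine reports for w.  Per-word predictions that differ from the
    median by more than the given factor are set aside before averaging."""
    predictions = {w: index_freq[w] / rel_freq[w]
                   for w in rel_freq if w in index_freq and rel_freq[w] > 0}
    values = sorted(predictions.values())
    median = values[len(values) // 2]
    kept = [p for p in predictions.values()
            if median / discrepancy <= p <= median * discrepancy]
    return sum(kept) / len(kept)

# the worked example from the text: 'the' is about 7 per 100 words of written
# English and appears 84 times in the U.S. Declaration of Independence
print(estimate_corpus_size({"the": 0.07}, {"the": 84}))   # 1200.0
```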
from the first megabyte of the german text found in the european corpus initiative multilingual corpus we extracted frequencies for function words and other short common words we removed from the list words that were also common words in other languages altavista provided on its results pages along with a page count for a query the number of times that each query word was found on the web table 2 shows the relative frequency of the words from our known corpus the index frequencies that altavista gave and the consequent estimates of the size of the germanlanguage web indexed by altavista we set aside words which give discrepant predictions as altavista does not record in its index the language a word comes from so the count for the string die includes both the german and english occurrences and a word might be under or overrepresented in the training corpus or on the web averaging the remaining predictions gives an estimate of three billion words of german that could be accessed through altavista on the day in february 2000 that we conducted our test
(table 2) word, relative frequency, altavista frequency, estimated size of german web
oder 0.00561180 13566463 2417488684
sind 0.00477555 11944284 2501132644
auch 0.00581108 15504327 2668062907
wird 0.00400690 11286438 2816750605
nicht 0.00646585 18294174 2829353294
eine 0.00691066 19739540 2856389983
sich 0.00604594 17547518 2902363900
ist 0.00886430 26429327 2981546991
auf 0.00744444 24852802 3338438082
und 0.02892370 101250806 3500617348
average 3068760356
this technique has been tested on controlled data in which corpora of different languages were mixed in various proportions and found to give reliable results table 3 provides estimates for the number of words that were available in 30 different latinscript languages through altavista in march 2001 english led the pack with 76 billion words and seven additional languages already had over a billion from the table we see that even smaller languages such as slovenian croatian malay and turkish have more than one hundred million words on the web much of the research that has been undertaken on the bnc simply exploits its scale and could be transferred directly to these languages the numbers presented in table 3 are lower bounds for a number of reasons repeating the procedure after an interval the second author and nioche showed that the proportion of nonenglish text to english is growing in october 1996 there how can these large numbers be used for other languageprocessing tasks consider the compositional french noun phrase groupe de travail in the memodata bilingual dictionary the french word groupe is translated by the english words cluster group grouping concern and collective the french word travail translates as work labor or labour many web search engines allow the user to search for adjacent phrases combining the possible translations of groupe de travail and submitting them to altavista in early 2003 yielded the counts presented in table 4 the phrase work group is 15 times more frequent than any other and is also the best translation among the tested possibilities a set of controlled experiments of this form is described in grefenstette in grefenstettes study a good translation was found in 87 of ambiguous cases from german to english and 86 of ambiguous cases from spanish to english we know the web is big but a common response to a plan to use the web as a corpus is but it is not representative there are a great many things to be said about this it opens up a pressing yet almost untouched practical and theoretical issue for computational linguistics and language technology first representativeness begs the question representative of what outside very narrow specialized domains we do not
know with any precision what existing corpora might be representative ofif we wish to develop a corpus of general english we may think it should be representative of general english so we then need to define the population of general englishlanguage events of which the corpus will be a sampleconsider the following issues writing or one of reading or hearingstandard conversations have for each utterance one speaker and one hearera times newspaper article has one writer and several hundred thousand readers song then does each individual singing constitute a distinct language production eventin the text domain organizations such as reuters produce news feeds that are typically adapted to the style of a particular newspaper and then republished is each republication a new writing eventapplication developers urgently need to know what to do about sublanguagesit has often been argued that within a sublanguage few words are ambiguous and a limited repertoire of grammatical structures is used this points to sublanguagespecific application developments being substantially simpler than generallanguage application developmenthowever many of the resources that developers may wish to use are generallanguage resources such as for english wordnet anlt xtag comlex and the bncare they relevant for building applications for sublanguagescan they be usedis it better to use a language model based on a large generallanguage corpus or a relatively tiny corpus of the right kind of textnobody knowsthere is currently no theory no mathematical models and almost no discussiona related issue is that of porting an application from the sublanguage for which it was developed to anotherit should be possible to use corpora for the two sublanguages to estimate how large a task this will be but again our understanding is in its infancymuch work in recent years has gone into developing language modelsclearly the statistics for different types of text will be different this imposes a limitation on the applicability of any language model we can be confident only that it predicts the behavior of language samples of the same text type as the trainingdata text type when a language technology application is put to use it will be applied to new text for which we cannot guarantee the text type characteristicsthere is little work on assessing how well one language model fares when applied to a text type that is different from that of the training corpustwo studies in this area are sekine and gildea both of which show substantial variation in model performance hits for spanish pensar que with and without possible dequeismos errors from allthewebcom not all items are errors the correct form is always at least 500 times more common than any potentially incorrect formweb texts are produced by a wide variety of authorsin contrast to paperbased copyedited published texts webbased texts may be produced cheaply and rapidly with little concern for correctnesson google a search for i beleave has 3910 hits and i beleive 70900the correct i believe appears on over four million pagestable 5 presents what is regarded as a common grammatical error in spanish comparing the frequency of such forms to the accepted forms on the weball the erroneous forms exist but much less often than the correct formsthe web is a dirty corpus but expected usage is much more frequent than what might be considered noisea language can be seen as a modest core of lexis grammar and constructions plus a wide array of different sublanguages as used in each of a myriad of human 
activitiesthis presents a challenge to generallanguage resource developers should sublanguages be includedthe three possible positions are the problem with the first position is that with all sublanguages removed the residual core gives an impoverished view of language the problem with the second is that it is arbitrarythe bnc happens to include cake recipes and research papers on gastrouterine diseases but not car manuals or astronomy textsthe third has not until recently been a viable optionto date corpus developers have been obliged to make pragmatic decisions about the sorts of text to go into a corpusatkins clear and ostler describe the desiderata and criteria used for the bnc and this stands as a good model for a generalpurpose generallanguage corpusthe word representative has tended to fall out of discussions to be replaced by the meeker balancedthe recent history of mathematically sophisticated modeling of language variation begins with biber who identifies and quantifies the linguistic features associated with different spoken and written text typeshabert and colleagues have been developing a workstation for specifying subcorpora according to text type using biberstyle analyses among othersin kilgarriff we present a first pass at quantifying similarity between corpora and cavaglia continues this line of workas mentioned above sekine and gildea directly address the relation between nlp systems and text type one further such item is roland et al buitelaar and sacaleanu explores the relation between domain and sense disambiguationa practical discussion of a central technical concern is vossen which tailors a generallanguage resource for a domainbaayen presents sophisticated mathematical models for word frequency distributions and it is likely that his mixture models have potential for modeling sublanguage mixtureshis models have been developed with a specific descriptive goal in mind and using a small number of short texts it is unclear whether they can be usefully applied in nlpalthough the extensive literature on text classification is certainly relevant it most often starts from a given set of categories and cannot readily be applied to the situation in which the categories are not known in advancealso the focus is usually on content words and topics or domains with other differences of genre or sublanguage remaining unexaminedexceptions focusing on genre include kessler nunberg and schutze and karlgren and cutting the web is not representative of anything elsebut neither are other corpora in any wellunderstood sensepicking away at the question merely exposes how primitive our understanding of the topic is and leads inexorably to larger and altogether more interesting questions about the nature of language and how it might be modeledtext type is an area in which our understanding is as yet very limitedalthough further work is required irrespective of the web the use of the web forces the issuewhere researchers use established corpora such as brown the bnc or the penn treebank researchers and readers are willing to accept the corpus name as a label for the type of text occurring in it without asking critical questionsonce we move to the web as a source of data and our corpora have names like april03sample77 the issue of how the text type can be characterized demands attentionone use of a corpus is to extract a language model a list of weighted words or combinations of words that describe how words are related how they are used with each other and how common they are in a given 
domainlanguage models are used in speech processing to predict which word combinations are likely interpretations of a sound stream in information retrieval to decide which words are useful indicators of a topic and in machine translation to identify good translation candidatesin this volume celina santamaria julio gonzalo and felisa verdejo describe how to build sensetagged corpora from the web by associating word meanings with web page directory nodesthe open directory project is a collaborative volunteer project for classifying web pages into a taxonomic hierarchysantamaria et al present an algorithm for attaching wordnet word senses to nodes in this same taxonomy thus providing automatically created links between word senses and web pagesthey also show how this method can be used for automatic acquisition of sensetagged corpora from which one could among other things produce language models tied to certain senses of words or for a certain domainunseen words or word sequencesthat is words or sequences not occurring in training dataare a problem for language modelsif the corpus from which a particular model is extracted is too small there are many such sequencestaking the second authors work as described above as a starting point frank keller and mirella lapata examine how useful the web is as a source of frequency information for rare items specifically for dependency relations involving two english words such as they generate pairs of common words constructing combinations that are and are not attested in the bncthey then compare the frequency of these combinations in a larger 325millionword corpus and on the webthey find that web frequency counts are consistent with those for other large corporathey also report on a series of humansubject experiments in which they establish that web statistics are good at predicting the intuitive plausibility of predicateargument pairsother experiments discussed in their article show that web counts correlate reliably with counts recreated using classbased smoothing and overcome some problems of data sparseness in the bncother very large corpora are available for english and the other three papers in the special issue all exploit the multilinguality of the webandy way and nano gough show how the web can provide data for an examplebased machine translation systemfirst they extract 200000 phrases from a parsed corpusthese phrases are sent to three online translation systemsboth original phrases and translations are chunkedfrom these pairings a set of chunk translations is extracted to be applied in a piecewise fashion to new input textthe authors use the web again at a final stage to rerank possible translations by verifying which subsequences among the possible translations are most attestedthe two remaining articles present methods for building aligned bilingual corpora from the webit seems plausible that such automatic construction of translation dictionaries can palliate the lack of translation resources for many language pairsphilip resnik was the first to recognize that it is possible to build large parallel bilingual corpora from the webhe found that one can exploit the appearance of language flags and other clues that often lead to a version of the same page in a different language10 in this issue resnik and noah smith present their strand system for building bilingual corpora from the weban alternative method is presented by wessel kraaij jianyun nie and michel simardthey use the resulting parallel corpora to induce a probabilistic translation 
dictionary that is then embedded into a crosslanguage information retrieval systemvarious alternative embeddings are evaluated using the clef multilingual information retrieval test beds
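As a concrete illustration of the kind of comparison Keller and Lapata report above (checking whether web frequency counts for word combinations are consistent with counts from a conventional corpus), the sketch below correlates log-scale web counts with log-scale corpus counts. It is a minimal sketch under assumed data: the count dictionaries are invented placeholders, not figures from the article, and Spearman rank correlation is one reasonable choice of statistic rather than the authors' exact analysis.

# Minimal sketch: correlating hypothetical web counts with hypothetical
# corpus (BNC-style) counts for the same word combinations.
# The numbers below are invented placeholders, not data from the article.
import math
from scipy.stats import spearmanr

web_counts = {"fulfil obligation": 52000, "strong tea": 310000,
              "powerful tea": 1200, "naughty regime": 40}
corpus_counts = {"fulfil obligation": 19, "strong tea": 64,
                 "powerful tea": 1, "naughty regime": 0}

pairs = sorted(web_counts)
web = [math.log(web_counts[p] + 1) for p in pairs]       # add-one to handle zero counts
corp = [math.log(corpus_counts[p] + 1) for p in pairs]

rho, pval = spearmanr(web, corp)
print("Spearman rho = %.3f (p = %.3f)" % (rho, pval))

A high rank correlation between the two columns is the kind of consistency the article above reports between web counts and counts from conventional large corpora.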
J03-3001
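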
introduction to the special issue on the web as corpusthe web teeming as it is with language data of all manner of varieties and languages in vast quantity and freely available is a fabulous linguists playgroundthis special issue of computational linguistics explores ways in which this dream is being realizedit is natural to question the appropriateness of web data for research purposes because web data is inevitably noisy and search engines themselves can introduce certain idiosyncrasies which can distort results
the web as a parallel corpus parallel corpora have become an essential resource for work in multilingual natural language processing in this article we report on our work using the strand system for mining parallel text on the world wide webfirst reviewing the original algorithm and results and then presenting a set of significant enhancements these enhancements include the use of supervised learning based on structural features of documents to improve classification performance a new contentbased measure of translational equivalence and adaptation of the system to take advantage of the internet archive for mining parallel text from the web on a large scale finally the value of these techniques is demonstrated in the construction of a significant parallel corpus for a lowdensity language pair parallel corpora have become an essential resource for work in multilingual natural language processingin this article we report on our work using the strand system for mining parallel text on the world wide webfirst reviewing the original algorithm and results and then presenting a set of significant enhancementsthese enhancements include the use of supervised learning based on structural features of documents to improve classification performance a new contentbased measure of translational equivalence and adaptation of the system to take advantage of the internet archive for mining parallel text from the web on a large scalefinally the value of these techniques is demonstrated in the construction of a significant parallel corpus for a lowdensity language pairparallel corporabodies of text in parallel translation also known as bitextshave taken on an important role in machine translation and multilingual natural language processingthey represent resources for automatic lexical acquisition they provide indispensable training data for statistical translation models and they can provide the connection between vocabularies in crosslanguage information retrieval more recently researchers at johns hopkins university and the university of maryland have been exploring new ways to exploit parallel corpora in order to develop monolingual resources and tools using a process of annotation projection and training given a parallel corpus in english and a less resourcerich language we project english annotations across the parallel corpus to the second language using wordlevel alignments as the bridge and then use robust statistical techniques in learning from the resulting noisy annotations for these reasons parallel corpora can be thought of as a critical resourceunfortunately they are not readily available in the necessary quantitiesuntil very recently for example statistical work in machine translation focused heavily on frenchenglish translation because the canadian parliamentary proceedings in english and french were the only large bitext availablethings have improved somewhat but it is still fair to say that for all but a relatively few language pairs parallel corpora tend to be accessible only in specialized forms such as united nations proceedings religious texts localized versions of software manuals and the likeeven for the top handful of majority languages the available parallel corpora tend to be unbalanced representing primarily governmental or newswirestyle textsin addition like other language resources parallel corpora are often encumbered by fees or licensing restrictionsfor all these reasons it is difficult to follow the more data are better data advice of church and mercer abandoning balance in 
favor of volume with respect to parallel textthen there is the world wide webpeople tend to see the web as a reflection of their own way of viewing the worldas a huge semantic network or an enormous historical archive or a grand social experimentwe are no different as computational linguists working on multilingual issues we view the web as a great big body of text waiting to be mined a huge fabric of linguistic data often interwoven with parallel threadsthis article describes our techniques for mining the web in order to extract the parallel text it containsit presents in revised and considerably extended form our early work on mining the web for bilingual text incorporating new work on contentbased detection of translations and efficient exploitation of the internet archivein section 2 we lay out the strand architecture which is based on the insight that translated web pages tend quite strongly to exhibit parallel structure permitting them to be identified even without looking at content we also show how we have improved strands performance by training a supervised classifier using structural parameters rather than relying on manually tuned thresholdsin section 3 we present an approach to detecting translations that relies entirely on content rather than structure demonstrating performance comparable to strands using this orthogonal source of informationin section 4 we describe how we have adapted the strand approach to the internet archive dramatically improving our ability to identify parallel web pages on a large scalesection 5 puts all the pieces together using structural and combined contentstructure matching of pages on the internet archive in order to obtain a sizable corpus of englisharabic web document pairsfinally we present our thoughts on future work and conclusionsstrand is an architecture for structural translation recognition acquiring natural dataits goal is to identify pairs of web pages that are mutual translationsin order to do this it exploits an observation about the way that web page authors disseminate information in multiple languages when presenting the same content in two different languages authors exhibit a very strong tendency to use the same document structure strand therefore locates pages that might be translations of each other via a number of different strategies and filters out page pairs whose page structures diverge by too muchin this section we describe how strand works and we also discuss several related webmining methods focusing on the overall architecture these systems have in common and the important systemspecific variationswe then show how tuning strands structural parameters using supervised training can significantly increase its performancefinding parallel text on the web consists of three main steps example of a candidate pairwe consider each of these steps in turn using the altavista search engines advanced search to search for two types of web pages parents and siblingsa parent page is one that contains hypertext links to differentlanguage versions of a document for example if we were looking for english and french bitexts the page at the left in figure 2 would lead us to one such candidate pairto perform this search for the englishfrench language pair we ask altavista for pages in any language that satisfy this boolean expression and a 10line distance filter is used to restrict attention to pages on which the english and french pointers occur reasonably close to one anotherspecifically those for which the regular expression is satisfied 
within 10 lines of the perl regular expression in the html sourcethis helps filter out a page that contained for example a link to english literature courses and also contained an unrelated link to french version at the topa sibling page is a page in one language that itself contains a link to a version of the same page in another language for example the page at the right of figure 2 contains a link on the left that says this page in english to perform this search for english pages matching a given french page we request pages in french that match the boolean expression anchorquotenglishquot or anchorquotanglaisquotmore recent versions of strand have added a spider component for locating pages that might have translationsgiven a list of web sites thought to contain bilingual text for a given language pair it is possible to download all the pages on each site any excerpts from a parent page and a sibling page the parent page is in italian and contains links marked italianoitalian francesefrench and ingleseenglish the sibling page is in dutch and contains a link marked this page in english in the leftmost column of which might have a translation on that sitealthough simple to implement this method of locating pages shifts the burden of narrowing down the possibilities to the process of generating candidate document pairsthe results reported here do not make use of the spider212 generating candidate pairspairing up potentially translated pages is simple when a search engine has been used to generate parent or sibling pages one simply pairs the two child pages to which the parent links or the sibling page together with the page to which it linkswhen all the pages on a site are under consideration the process is rather differentthe simplest possibility is to separate the pages on a site into the two languages of interest using automatic language identification throwing away any pages that are not in either language and then generate the cross productthis potentially leads to a very large number of candidate page pairs and there is no particular reason to believe that most of them are parallel translations other than the fact that they appear on the same web sitethe spider component of strand adds a urlmatching stage exploiting the fact that the directory structure on many web sites reflects parallel organization when pages are translations of each othermatching is performed by manually creating a list of substitution rules 1 and for each english url applying all possible rules to generate urls that might appear on the list of pages for the other languageif such a url is found the pair with similar urls is added to the list of candidate document pairsfor example suppose an englishchinese site contains a page with url on which one combination of substitutions might produce the url the original page and the produced url are probably worth considering as a likely candidate pairowing to the combinatorics only a fixed number of substitution combinations can be tried per english url however in section 43 we describe a more scalable urlmatching algorithmanother possible criterion for matching is the use of document lengthstexts that are translations of one another tend to be similar in length and it is reasonable to assume that for text e in language 1 and text f in language 2 length clength where c is a constant tuned for the language pairthe use of a document length filter is described in smith in which such a filter is shown at the sentence level to reduce the size of the search space exponentially 
in the confidence p in a confidence interval for a linear regression model with only linear loss of good pairs relies on analysis of the pages underlying html to determine a set of pairspecific structural values and then uses those values to decide whether the pages are translations of one anotherthe first step in this process is to linearize the html structure and ignore the actual linguistic content of the documentswe do not attempt to exploit nonlinear structure for two reasonsfirst we suspect that many html authors use tags for formatting text rather than for indicating document structure therefore any tree structure is likely to be inconsistent or poorly matchedsecond we required the matching algorithm to be fast and algorithms for aligning tree structures are more demanding than those for linear structuresboth documents in the candidate pair are run through a markup analyzer that acts as a transducer producing a linear sequence containing three kinds of token the chunk length is measured in nonwhitespace bytes and the html tags are normalized for caseattributevalue pairs within the tags are treated as nonmarkup text the second step is to align the linearized sequences using a standard dynamic programming technique for example consider two documents that begin as follows using this alignment we compute four scalar values that characterize the quality of the alignment dp the difference percentage indicating nonshared material n the number of aligned nonmarkup text chunks of unequal length r the correlation of lengths of the aligned nonmarkup chunks p the significance level of the correlation r the difference percentage quantifies the extent to which there are mismatches in the alignment sequence tokens on one side that have no corresponding token on the other sidein the example above one document contains an h1 header that is missing from the second documentlarge numbers of such mismatches can indicate that the two documents do not present the same material to a great enough extent to be considered translationsthis can happen for example when two documents are translations up to a point but one document goes on to include a great deal more content than anothereven more frequently the difference percentage is high when two documents are prima facie bad candidates for a translation pairthe number of aligned nonmarkup text chunks helps characterize the quality of the alignmentthe dynamic programming algorithm tries to optimize the correspondence of identical tokens which represent markup2 as a side effect the nonmarkup text chunks are placed in correspondence with one another the more such pairings are found the more likely the candidate documents are to represent a valid translation pairthe remaining two parameters quantify the extent to which the corresponding nonmarkup chunks are correlated in lengthwhen two documents are aligned with one another and are valid translations there is a reliably linear relationship in the length of translated chunks of text short pieces correspond with short pieces medium with medium and long with longthe pearson correlation coefficient r for the lengths will be closer to one when the alignment has succeeded in lining up translated pieces of text and the p value quantifies the reliability of the correlation for example the standard threshold of p 0 an edge exists between them with weight logpif a word x may link to null with nonzero probability then that potential link is added as an additional edge in the graph between x and a null vertex added to v2 
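Before the content-based measure is developed further below, the structural comparison just described can be sketched as follows. This is a rough illustration under stated assumptions, not the authors' implementation: difflib's SequenceMatcher stands in for the paper's dynamic-programming alignment, the HTML linearizer is deliberately naive (tag attributes are simply discarded, whereas the article treats attribute values as text), chunk tokens are collapsed to a single symbol so that text chunks of unequal length can still be aligned with one another, and scipy's pearsonr supplies both the correlation r and its significance p.

# Rough sketch of STRAND-style structural comparison (not the authors' code).
import difflib
import re
from scipy.stats import pearsonr

TAG_OR_TEXT = re.compile(r"<(/?)([a-zA-Z][a-zA-Z0-9]*)[^>]*>|([^<]+)")

def linearize(html):
    """Reduce HTML to [START:tag] / [END:tag] / [CHUNK] tokens plus chunk lengths."""
    tokens, lengths = [], []
    for close, tag, text in TAG_OR_TEXT.findall(html):
        if tag:
            tokens.append(("[END:%s]" if close else "[START:%s]") % tag.lower())
            lengths.append(None)
        else:
            size = len("".join(text.split()))    # non-whitespace characters (approximating bytes)
            if size:
                tokens.append("[CHUNK]")         # lengths kept separately so any
                lengths.append(size)             # two text chunks may be aligned
    return tokens, lengths

def structural_values(html_a, html_b):
    """Return (dp, n, r, p) roughly as defined in the article."""
    ta, la = linearize(html_a)
    tb, lb = linearize(html_b)
    matcher = difflib.SequenceMatcher(None, ta, tb, autojunk=False)
    matched, chunk_pairs = 0, []
    for i, j, size in matcher.get_matching_blocks():
        matched += 2 * size
        for k in range(size):
            if ta[i + k] == "[CHUNK]":
                chunk_pairs.append((la[i + k], lb[j + k]))
    total = len(ta) + len(tb) or 1
    dp = 1.0 - matched / float(total)                      # difference percentage
    n = sum(1 for a, b in chunk_pairs if a != b)           # aligned chunks of unequal length
    if len(chunk_pairs) > 2:
        r, p = pearsonr([a for a, _ in chunk_pairs], [b for _, b in chunk_pairs])
    else:
        r, p = 0.0, 1.0
    return dp, n, r, p

In the article these four values are then either thresholded or fed to a learned decision tree to decide whether a candidate pair passes the structural filter.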
each such x gets its own null vertex so that multiple words may ultimately link to nulla sum of weights of links in a matching will be the logprobability of the link sequence and maximizing that sum maximizes the probabilitythe similarity score should be high when many of the link tokens in the best sequence do not involve null tokensfurther it should normalize for text lengthspecifically the score is this score is an application of lins informationtheoretic definition of similaritystarting with a set of axioms relating intuitions about similarity to the mathematical notion of mutual information lin derives the measure where x and y are any objects generated by a probabilistic modelthis technique of using a translation model to define translational similarity is generic to different sources of lexical translation informationan important feature is that it can be used with any symmetric translation model in which events can be divided into those that both sides of a bitext have in common and those that affect only one sidethe measure is simplified by assuming that all links in a given translation lexicon are equiprobablethe assumption reduces the formula for tsim to number of twoword links in best matching number of links in best matching the key reason to compute tsim under the equiprobability assumption is that we need not compute the mwbm but may find just the maximum cardinality bipartite matching since all potential links have the same weightan o for this purpose algorithm exists for mcbm for example if the matching shown in figure 4 is the mcbm then tsim 47 under the simplifying assumptionin earlier work we sought to show how multiple linguistic resources could be exploited in combination to recognize translation and how the equiprobability assumption allowed straightforward combination of resources in section 321 we provide a clean solution to the problem of using unweighted translation lexicons along with probabilistic ones that improves performance over the earlier resultthis would appear to make the equiprobability assumption unnecessary however we found that if p is set to the empirically estimated joint probability of the lexical link type then performance turns out to be dismalthis is understandable using parameter estimation techniques like the one we used a great deal of probability mass in the distribution p tends to go to frequent words which are relatively uninformative with regard to whether texts are mutual translationsthe equiprobability assumption helps to counteract this in fact one could apply scoring techniques from information retrieval and crosslingual information retrieval in weighting the lexiconwe leave this area of exploration to future workmelamed used a greedy approximation to mwbm called competitive linkingcompetitive linking iteratively selects the edge with the highest weight links the two vertices of the edge then removes them from the grapha heapbased implementation of competitive linking runs in o log maxunder the equiprobability assumption all the weights are the same so that competitive linking proceeds simply by randomly making legal links until no more can be madeif definition is applied to pairs of documents in the same language with a translation lexicon defined by the identity relation then tsim is a variant of resemblance as defined by broder et al for the problem of monolingual duplicate detection except that tsim has the advantage of being tokenbased rather than typebased incorporating word frequencywe have demonstrated that the tsim score 
can be used to extract translationally equivalent englishchinese sentence pairs from even a noisy space with high precision it was also shown that combining multiple sources of wordlevel translation information had positive effects on performance on the sentencematching taskthese information sources were presumed to be extremely noisy though no independent evaluation was carried out on themif the ad hoc translation lexicon induction techniques used here give good performance then better techniques might lead to further improvementin addition the competitive linking approximation was shown to perform nearly as well as mcbmwe now apply our contentbased similarity measure to the candidate pair classification task presented by strandrecall that both the original strand classifier and those learned using decision tree methods described in section 223 employ only structural features of the documents to determine whether they are translationshere we apply the tsim score to the same task and compare the results with those of the original strand classifier sourceswe begin with an englishfrench dictionary 7 next a wordtoword translation model was trained on the dictionarynote that the parameter estimation task here is very simple in most cases the pairs are oneword to oneword making the hidden link structure unambiguous the training primarily served the purpose of breaking down multiword entries informed by the rest of the entries so as to obtain a fully onewordtooneword dictionarythe training procedure was an expectationmaximization procedure like that used by melamed except that maximum weighted bipartite matching was used instead of competitive linkingany entry containing a null was then removedwe add to this dictionary a list of englishfrench cognate pairs identified using the method of tiedemann tiedemanns approach involved learning languagespecific character weights for the computation of weighted edit distance to measure cognatenesshe used a list of known cognates to train the weightswe instead used the weighted translation pairs in a translation model lexicon built from the bible8 the result is 35513 word pairs from the corpus of web pages under considerationan additional set of 11264 exact string matches were added qualitatively these entries were highly noisy a random selection of the cognate pairs is shown in table 2all of these word pairs were added to the dictionary each with a count of onewe took the enhanced dictionary with counts to define a dirichlet prior which is the conjugate prior to a multinomial distribution over discrete events such a prior is characterized by counts of all such events when it is used in an them procedure these prior counts are added to those produced by the e step on every iterationintuitively if a word pair is expected to be a likely lexical word pair in the dictionary and cognate set then models that make probable are more likely therefore the expected count of is increased at each iteration of training to the extent that the prior favors itusing the enhanced weighted lexicon as a dirichlet prior a wordtoword translation model was trained on a versealigned bible as before we used the maximum weighted bipartite matching algorithmthe final lexicon consists of all word pairs with nonzero probability and contains 132155 entriesnote that all word pairs in the enhanced dictionary are included we have merely added to that dictionary by bootstrapping additional entries from the bible322 resultsin order to compare tsim with structural similarity scoring we applied 
it to 325 englishfrench web document pairs for which human evaluations were carried out in section 2as there is only one feature under consideration the classifier must be a threshold on that valueat different thresholds cohens κ score of agreement two judges and their intersection may be computed for comparison with strand along with recall and precision against a gold standard the gold standard contained 86 page pairs marked as good by both judges and 174 page pairs marked as bad by both judges9 computing tsim is not tractable for very large documents and translation lexiconshowever in preliminary comparisons we found that representing tsim for long documents by as few as their first 500 words results in excellent performance on the r measure10 this allows o estimation of tsim for two documentsfurther the competitive linking algorithm appears to be as reliable as mcbm and it runs significantly faster in practicethe results reported here approximated tsim in this wayof the 325 pairs 32 were randomly selected as a development set which we used to select manually a threshold t 044this value maximized the r score against goldstandard human judgments on the development set11 r scores against each judge and their intersection were then computed at that threshold on the test set these are compared to r scores of the strand system on the same test set in table 3in every case the tsim classifier agreed more strongly with the human evaluations and its f score is higher than that of strandfigure 5 shows r and the f measure plotted against t in this application the contentbased classifier complements the structural classifiers high precisiongiven two highperforming methods that use orthogonal information for identifying good candidate pairs the natural question is whether the techniques can be combined for even better performancewe repeated the experiment presented in section 223 adding the tsim score as a featurethe same crossvalidation setup was used with the same division into foldsprecision and recall results are reported in table 4the decision trees learned were once again all quite similar with eight of the nine rooted as follows the remainder of each tree varied and there was some evidence of overtrainingthese results clearly demonstrate the benefit of combining structural and contentbased approacheswe next describe how we have adapted the strand architecture to the internet archive in order to generate the candidate pairs on a scale that has previously been unattainableone of the difficulties with doing research on web mining is that largescale crawling of the web is a significant enterprisechen and nies ptminer represents one solution a carefully thoughtout architecture for mining on a large scalehere we present a different solution taking advantage of an existing largescale repository of web pages maintained on an ongoing basis by an organization known as the internet archivethe internet archive is a nonprofit organization attempting to archive the entire publicly available web preserving the content and providing free access to researchers historians scholars and the general publicdata come from crawls done by alexa internet and hence they represent an industrylevel resource of the sort not easily constructed within academiaat present the archive contains 120tb of data by a conservative estimate and it is growing at approximately 8tb per monthtext on the archive comprises over 10 billion web pages and the estimated duplicate rate is a factor of two 12 the internet archive provides public 
access to the data via the wayback machine web interfaceas of this writing a search for the acl home page brings up links to 72 snapshots of that page dating back to june 7 199713 the reader can get to that page directly on the wayback machine using a url that points to the internet archive and provides both the desired page and the time stamp indicating which snapshot to retrieve14 the archive also provides researchers with free direct access to its data via accounts on their clusterthe data are stored on the disk drives of approximately 300 machines each running some variety of unix creating what is in essence one huge file systemthis provides a researcher with the remarkable sensation of having the entire web on his or her hard drivemining terabytes on the archive presents a number of challenges the last of these the cluster computing tools15 is turned out to reduce drastically the time needed to port strand to the archive as well as the size of the strand code basethe centerpiece in archive cluster computing is a parallelization tool called p2 which offers a unix commandline interface that allows one to specify a parallelizable task a way to split it up a way to combine the results and a set of processors among which to divide the taskthe p2 tool divides up tasks intelligently invoking each parallel computation on the local machine where the data residein adapting strands threestage process to the internet archive the primary challenge was in the first two steps locating possible translations and matching them up to produce candidate document pairsstructural filtering remained essentially unchangedgenerating candidate pairs on the archive involves the following steps steps 1 and 2 are performed via a parallel search operation plus combination of results for example extracting all urls in the hong kong taiwan or china domains using a pattern like 16 step 3 is potentially tricky owing to computational complexity issuesas noted in section 212 examining the cross product of a sites page sets in two different languages is potentially very expensive and matching documents by similarity of urls can represent a combinatoric process in the general caseexample of lss subtractionwe arrived at an algorithmically simple solution that avoids this problem but is still based on the idea of languagespecific substrings the idea is to identify a set of languagespecific url substrings that pertain to the two languages of interest for example a set of lsss for englisharabic might be as follows 1256 437 864 88591 88596 a ar ara arab arabic cp1256 cp437 cp864 e en eng english gb iso iso88591 iso88596 latin latin1 latin1 uk us usa for each url we form a handle by subtracting any substrings that match any item on the lss pattern listthe subtraction process is implemented reasonably efficiently if there are p patterns with maximum length l and the urls length in characters is you then the current implementation will do at most p you string matches of length no more than l17 in practice this is extremely fast we can generate handles for nearly 5000 urls per second on a sixyearold sun ultra 1 workstationfigure 6 illustrates handle generation on two real urlsas one would hope these two urls produce the same handle and as a result they wind up in the same bucket in step 318 in step 4 the urls in each bucket are used to generate candidate pairs by taking the cross product and keeping those url pairs for which the url bookkeeping data indicate pages that are in the correct languagesfor example given the bucket 
containing the two urls in figure 6 this step would generate a single pair consisting of be more efficient to create buckets by doing a parallel sort of the entire url set using the handle as the key and then creating buckets based on identical handles being on adjacent lines the url for the english page and the url for the arabic page assuming the language id information associated with each url confirmed it was in the proper language19 at this point the candidate generation process is completethe final step is to apply strands filtering step to each candidate pair an operation that can itself be parallelized since each candidate pair can be processed independentlythe filtering pass will eliminate those page pairs whose urls show similarity to each other but whose content andor structure do notit is interesting to note that by taking advantage of the archives p2 cluster computing tool together with its simple flattext representations adapting strands candidate generation process resulted in a dramatic reduction in the size of the program cutting it literally in half as measured in lines of codein the previous sections we have described methods and results for structural matching for contentbased matching and for dramatically scaling up the number of candidate pairs that can be generated for any given language pair by using the industrialstrength web crawls stored on the internet archivein this section we put all these pieces together describing an experiment in mining the internet archive to find englisharabic parallel textthe language pair englisharabic is of particular global importance but resources for it particularly bilingual text have generally not been easy to obtainmoreover arabic is far behind on the webs exponential growth curve arabic text did not really start emerging on the web until the release of microsoft windows 98 which provided arabic support in its version of internet explorer20 the input resources for our search for englisharabic candidate pairs were a list of internet domains likely to contain arabic text21 the list included 24 toplevel national domains for countries where arabic is spoken by a significant portion of the population egypt saudi arabia kuwait etcin addition we used a list of com domains known to originate in arabicspeaking countriesthis list provided an additional 21 specific domains note that the list is by no means exhaustivein the experiments we report here we mined two crawls from 2001 comprising 8tb and 12tb spread over 27 machinesour list of urls with relevant domains obtained through pattern matching in archive index files numbers 19917923 pages22 the languagespecific substrings given earlier were subtracted from these urls to generate handles resulting in 786880 buckets with an average of 25 pages per bucketwhen all possible englisharabic page pairs were generated from all 19 the internet archive tags its data for language using standard ngram language identification techniques buckets the result was 8294 candidate pairsthis number is lower than what might be expected given the huge number of buckets because many buckets were monolingual note that only pairs of one english and one arabic document are deemed to be candidatesa random sample of two hundred candidate pairs was given to two human evaluators bilingual in english and arabic who were asked to answer for each pair the question is this pair of pages intended to show the same material to two different users one a reader of english and the other a reader of arabic the judges answers 
showed a cohens κ agreement of 06955 which is generally considered fair to good reliabilitytaking the set of 149 labeled pairs on which the two judges agreed we carried out an evaluation of the full candidate set similar to the one for englishfrench discussed in section 223this was a threefold crossvalidation experiment in which decision tree classifiers were tuned on the features extracted for each candidate pair by structurebased classification23 in addition to the four structural scores we included two language identification confidence scores these were available as part of the internet archives bookkeeping information for each url and required no additional computation on our parttable 5 shows precision and recall of each folds classifier applied to the corresponding test set of page pairsthe value of the parametertuning process is dramatically confirmed by comparing the learned parameters with strands default parameters note however that the candidate generation system is highly precise to begin with only around 10 of the pairs in the random sample of candidates were considered bad by both judgesa baseline system in which no filtering is done at all achieves 8993 precision on the full labeled set depending on the relative importance of precision and recall these structurebased classifiers might be considered worse than that baselineupon inspection we discovered that nearly 5000 of the pairs in our candidate set were from a single domain this site supports an online marketplace and many of the pages discovered by our search were dedicated to specific merchandise categories within that service a large portion of these were simply no items available and one or two similar messageswe ignored this domain completely in order to be conservative about the yield of page pairs though we note that many of the pages within it are legitimate parallel text that could be extracted if a good duplicates filter were applied24 in order to construct a final classifier we trained a decision tree on all 149 of the manually judged examples on which both judges agreedthis was then applied to the candidate pairs producing a set of 1741 html document pairs hypothesized to be valid translations of one anotherby way of simple duplicate detection if a pair of urls appeared multiple times it was counted only oncethe remaining set contained 1399 pairs25 converting from html to plain text and tokenizing the english documents in this corpus total approximately 673108 tokens with an average of 481 tokens per document the arabic side contains 845891 tokens averaging 605 tokens per document26 we combined the structural and contentbased approaches to detecting translations by adding the tsim score to the set of structural features associated with each candidate pair and then training a new decision tree classifierbecause arabic is a highly inflected language with many surface forms we found it necessary to use morphological preprocessing in order to make effective use of a dictionaryfor english we tokenized the text and used the wordnet lemmatizer to strip suffixesthe arabic texts were tokenized at punctuation then romanized and converted to root forms using a morphological analysis tool this approximately halved the vocabulary size for the arabic texts the translation lexicon used to compute tsim contained 52211 entries each containing one english lemma and one arabic root27 of these 16944 contained two items that were both present in the candidate set of 8294 web page pairsthe approximations discussed in section 322 
were employed competitive linking on the first 500 words in each document was used to compute the scorecarrying out the same crossvalidation experiment the combined structural and contentbased classifier produced the results in table 6also shown is the performance of the tsimonly classifier assuming an optimal threshold is chosenaveraged over three folds the classifier achieved 9506 precision and 9848 recall after building a single classifier on all 149 test pairs we reclassified the entire candidate setignoring again pages from the domain 2206 pairs were marked as translationsthe same crude duplicate filter was applied cutting the set back to 1821 pairs28 table 7 shows word counts for various tokenization schemes the morphological analysis used for computing tsim the egypt tokenizer and counting only tokens with some alphabetic character from the egypt tokenizer the analogous results using the classifier from section 52 are shown for comparisonto summarize the results using the contentbased similarity score as a feature not only improved precision it increased the size of the corpus by 5163 depending on the tokenization scheme29a number of the techniques we have used to mine parallel data from the web can be improved and we suggest here some directions28 there were 1796 unique english urls and 1779 unique arabic urls giving document duplication rates of 14 and 24 respectively29 a list of wayback machine urls is available at a sample of the document pairs is included in appendix awith respect to classifying document pairs as translations the reader will notice that our approach to contentbased crosslingual similarity essentially boils down to a greedy matching of some of the words in a document pair using a dictionaryit remains to be seen whether weights in the dictionary can be exploited we suggest that the incorporation of scores from information retrieval might be useful in discerning which lexicon entries are the strongest cues of translational equivalencewe also have not explored any filtering on the noisy translation lexicon doing so might improve the quality of the tsim scorethe competitive linking approximation and the use of only the initial portion of each document provide significant computational savingsin our experience neither of these has significantly hurt the performance of tsimbased classifiers and in some cases competitive linking seems to improve performanceit is possible that some sample selection of words from document candidates might be profitable smith suggested a bootstrapping paradigm for the construction of parallel corporabeginning with a seed set of translation information highprecision initial classifiers might be constructed using content andor structural features we might then iteratively select additional page pairs in which the current classifier has high confidence of translational equivalence gradually increasing the pool of parallel data and at the same time expanding the bilingual lexiconthis approach to minimally supervised classifier construction has been widely studied especially in cases in which the features of interest are orthogonal in some sense with respect to the generation of candidate pairs we have described a progression from indexbased searches on altavista to exhaustive matching of urls on the internet archivethe combination of these approaches may be profitable particularly for languages that are represented only very sparsely on the webfor such languages indexbased searches on words from a language of interest might be used to 
identify sites potentially containing parallel textwithin such sites it would likely be profitable to look for parallel documents in the full cross product of documents in the two languages of interest obtained both on the internet archive and via crawling all pages on relevant sitesfinally we plan to utilize parallel texts mined from the web in our work on machine translation and acquisition of bilingual lexicons and in the creation of resources for new languages via projection of annotations from englishalthough efforts at discovering parallel text on the web were first reported in 1998 webbased parallel corpora appear to have had only a limited impact on the communitythree reasons for this suggest themselvestoo few languagesparallel text from the web has been made available to the community in only a few pairs of languagesas of this writing the strand web site presenting url pairs discovered via strand runs contains collections only for englishfrench englishchinese englishbasque and now englisharabic and we are not aware of any other efforts to disseminate webbased parallel data publiclyup to this point it simply has not been easy to search the web for parallel text in new language pairsthe most difficult part is finding the candidates a year or two ago we attempted to apply the original webbased strand to the problem of finding englisharabic text and we were unable to locate enough search engine hits or sites to yield useful resultstoo little datavery large webbased parallel text collections are not available to the communitythe largest appear to have been obtained by chen and nie who acquired collections on the order of 15000 document pairs for englishfrench englishgerman englishdutch englishitalian and englishchinese using the ptminer systemhowever these collections have not been made available to the general community30 in contrast the strand collections which are available to the community in the form of url pairs are modest in size the englishchinese collection contains fewer than 3500 document pairs and the englishfrench fewer than 2500difficulty with disseminationwebbased collections are difficult to distributestandard mechanisms of the sort used by the ldc are fraught with difficult legal issues since technically speaking redistributing the actual content of web pages could require permission from the author of every pagefor example presumably as a risk reduction strategy the web track for trec2002 limited its attention to the gov domain and required the recipient of the data to sign a form that reduced the distributors liability31 similarly the google programming contest data set arrived with a limiteduse license indemnification from thirdparty claims and a collection limited to the edu domain from which presumably authors are less likely to bring expensive lawsuits a possible fourth reason may have to do with questions about the utility of the datafor example a webbased parallel collection may be unpredictable in terms of its coverage and the community is well aware of the dangers of using training data that are not representative of the test domaina solution to this problem might be to extract topically relevant subsets of the collection for particular domains or applications but of course this requires a more is better approach in order to obtain subsets that are large enough to be usefulthe work reported in this article addresses each of these major problemswith respect to the number of language pairs the internet archive offers us a huge sample of pages on the web and our 
techniques make it easy to explore that collection in an efficient wayalthough it is probably impossible to crawl more than a small fraction of the web the internet archive is storing the results of commercialscale web crawling and has as its explicit mission the permanent storage of everything that can be foundthe fact that we were able to find a substantial quantity of englisharabic text offers the hope that it will be possible to find data for the less wellrepresented language pairs if and when those data actually existmoreover the final implementation we described here retains the almost entirely languageindependent character of the original strand system adding only the requirement of a reasonable translation lexicontherefore success in mining for parallel text in other languages depends primarily on whether the data exist in the archivewith regard to corpus size we demonstrated that the recall of structural matching and hence its yield can be significantly improved by simple and automatic classifier construction requiring only a few hours work from a bilingual annotator to create the training materialthese results are further improved by adding contentbased similarity as a featureour success with englisharabic a language pair that is not one of those usually considered well represented on the web encourages us to believe that for other languages of interest we will be similarly successfulwe have also done a bit of exploration to gauge the potential of the archive for betterrepresented language pairs using englishchinese as an exampleby way of context chen and nie reported that ptminer found around 15000 englishchinese document pairs by crawling 185 sites in the hk domain with the run taking about a weekwe did a strand search of the two internet archive crawls used in the englisharabic study seeking englishchinese parallel text in multiple domains where chinese is a dominant language our initial candidate pair set was generated in approximately 30 hours and contains over 70000 candidate page pairswe are optimistic that this can be improved still further by expanding the search to include all sites that contain at least one chinese document regardless of the domainin terms of dissemination the strand distribution mechanism models itself after web search engines distributing the urls rather than the pages themselvesthis places the legal burden on individual users who are presumably safe under fair use provisions if they download pages for their individual useuntil recently the difficulty with this solution has been that the collection of urls deteriorates over time as sites disappear pages are reorganized and underlying content changes for example in april 2002 we attempted to download the documents in the strand englishfrench englishchinese and englishbasque collections and we were able to access successfully only around 67 43 and 40 of the url pairs respectivelyhowever the internet archives wayback machine provides a way to distribute persistent urlswith regard to the quality of the data in section 222 we discussed two studies that demonstrate the utility of parallel web data in acquiring translation lexicons for crosslanguage information retrievalwe also reported on the results of a human ratings study which provided evidence that englishchinese data mined from the web contain reasonably fluent reasonably translated sentence pairsit is worth pointing out that because strand expects pages to be very similar in structural terms the resulting document collections are particularly 
amenable to sentence or segmentlevel alignmentindeed just using dynamic programming to align the markup ignoring the text produces reasonable firstpass alignments of the intervening text as a side effectwe are currently adapting statistical textbased sentence alignment techniques to take advantage of the markup available in webbased document pairsultimately the utility of parallel data from the web is a question that will need to be addressed in practicethe potential of course is as rich and diverse as the web itself and what we as researchers can do with it is an exciting question that remains to be answeredthe following page pairs are representative of englisharabic parallel corpus extracted from the internet archivethe text from the pages is shown in fullnote that the full corpus is available as a list of wayback machine urls at these pages show the generally high quality of the corpus and also illustrate some of the potential difficulties with parallel web datafor example the arabic page in the first pair includes an additional caption not present in the english sidethese kinds of problems are expected to be overcome during sentence alignment processingfor each item participants were instructed to provide three ratingsquality of the englishkeezer for permitting and facilitating our use of the internet archivefinally we are indebted to several computational linguistics reviewers whose comments helped us to greatly improve this article
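The content-based score tsim defined earlier in this article can be sketched compactly under the equiprobability assumption. The version below is an illustrative approximation rather than the authors' implementation: it uses greedy competitive linking in token order (rather than random order or maximum-cardinality bipartite matching), assumes lexicon is a set of (English word, foreign word) translation pairs, and counts one NULL link per unlinked token so that tsim is the fraction of links that join two words. Tokenization, morphological normalization, and the first-500-words truncation mentioned above are left to the caller.

def tsim(english_tokens, foreign_tokens, lexicon):
    """Approximate tsim: two-word links / all links, via greedy competitive linking."""
    available = list(range(len(foreign_tokens)))
    two_word_links = 0
    for e in english_tokens:
        for idx in available:
            if (e, foreign_tokens[idx]) in lexicon:   # a legal link exists
                available.remove(idx)                 # each token links at most once
                two_word_links += 1
                break
    # every unlinked token on either side links to its own NULL vertex
    total_links = len(english_tokens) + len(foreign_tokens) - two_word_links
    return two_word_links / float(total_links) if total_links else 0.0

A candidate document pair would then be classified as a translation pair if tsim exceeds a tuned threshold (0.44 in the experiment reported above), or the score can simply be added as one more feature alongside the structural values.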
J03-3002
the web as a parallel corpusparallel corpora have become an essential resource for work in multilingual natural language processingin this article we report on our work using the strand system for mining parallel text on the world wide web first reviewing the original algorithm and results and then presenting a set of significant enhancementsthese enhancements include the use of supervised learning based on structural features of documents to improve classification performance a new contentbased measure of translational equivalence and adaptation of the system to take advantage of the internet archive for mining parallel text from the web on a large scalefinally the value of these techniques is demonstrated in the construction of a significant parallel corpus for a lowdensity language pairwe mine parallel web documents within bilingual web sites first and then extract bilingual sentences from mined parallel documents using a sentence alignment method we exploit the similarities in url structure document structure and other clues for mining the web for parallel documents
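To make the URL-structure clue mentioned in this summary concrete, here is a hedged sketch of candidate generation by language-specific substring (LSS) subtraction, as described for the Internet Archive adaptation in the article above. The LSS list below is a small invented sample rather than the authors' full list, matching is restricted to substrings bounded by non-alphanumeric characters as a simplification, and the two input URL lists are assumed to have already been language-identified.

import re
from collections import defaultdict

# Small assumed sample of language-specific substrings for English-Arabic.
LSS = ["english", "arabic", "ara", "eng", "en", "ar", "e", "a",
       "iso88596", "iso88591", "cp1256", "latin1", "88596", "88591"]
ALTS = "|".join(sorted(map(re.escape, LSS), key=len, reverse=True))
LSS_RE = re.compile(r"(?<![a-z0-9])(?:%s)(?![a-z0-9])" % ALTS)

def handle(url):
    """Subtract language-specific substrings to form a language-neutral handle."""
    return LSS_RE.sub("", url.lower())

def candidate_pairs(english_urls, arabic_urls):
    """Bucket URLs by handle, then pair English with Arabic URLs within each bucket."""
    buckets = defaultdict(lambda: ([], []))
    for u in english_urls:
        buckets[handle(u)][0].append(u)
    for u in arabic_urls:
        buckets[handle(u)][1].append(u)
    return [(e, a) for eng, ara in buckets.values() for e in eng for a in ara]

# e.g. ".../english/news.htm" and ".../arabic/news.htm" reduce to the same
# handle ".../news.htm" and therefore end up in the same bucket.

Each resulting candidate pair would then be passed to the structural and content-based filters sketched earlier.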