Finding Parts in Very Large Corpora

We present a method for extracting parts of objects from wholes (e.g., "speedometer" from "car"). Given a very large corpus, our method finds part words with 55% accuracy for the top 50 words as ranked by the system. The part list could be scanned by an end-user and added to an existing ontology (such as WordNet), or used as part of a rough semantic lexicon.

We present a method of extracting parts of objects from wholes (e.g., "speedometer" from "car"). To be more precise, given a single word denoting some entity that has recognizable parts, the system finds and rank-orders other words that may denote parts of the entity in question. Thus the relation found is, strictly speaking, between words, a relation Miller [1] calls "meronymy." In this paper we use the more colloquial "part-of" terminology. We produce words with 55% accuracy for the top 50 words ranked by the system, given a very large corpus. Lacking an objective definition of the part-of relation, we use the majority judgment of five human subjects to decide which proposed parts are correct. The program's output could be scanned by an end-user and added to an existing ontology (e.g., WordNet), or used as part of a rough semantic lexicon.

To the best of our knowledge, there is no published work on automatically finding parts from unlabeled corpora. Casting our nets wider, the work most similar to what we present here is that by Hearst [2] on acquisition of hyponyms ("isa" relations). In that paper Hearst (a) finds lexical correlates to the hyponym relations by looking in text for cases where known hyponyms appear in proximity (e.g., in the construction (NP, NP and (NP other NN)) as in "boats, cars, and other vehicles"), (b) tests the proposed patterns for validity, and (c) uses them to extract relations from a corpus. In this paper we apply much the same methodology to the part-of relation. Indeed, in [2] Hearst states that she tried to apply this strategy to the part-of relation, but failed. We comment later on the differences in our approach that we believe were most important to our comparative success.

Looking more widely still, there is an ever-growing literature on the use of statistical, corpus-based techniques in the automatic acquisition of lexical-semantic knowledge ([3-8]). We take it as axiomatic that such knowledge is tremendously useful in a wide variety of tasks, from lower-level tasks like noun-phrase reference and parsing to user-level tasks such as web searches, question answering, and digesting. Certainly the large number of projects that use WordNet [1] would support this contention. And although WordNet is hand-built, there is general agreement that corpus-based methods have an advantage in the relative completeness of their coverage, particularly when used as supplements to the more labor-intensive methods.

Webster's dictionary defines "part" as "one of the often indefinite or unequal subdivisions into which something is or is regarded as divided and which together constitute the whole." The vagueness of this definition translates into a lack of guidance on exactly what constitutes a part, which in turn translates into some doubts about evaluating the results of any procedure that claims to find them.
More specifically, note that the definition does not claim that parts must be physical objects. Thus, say, "novel" might have "plot" as a part. In this study we handle this problem by asking informants which words in a list are parts of some target word, and then declaring majority opinion to be correct. We give more details on this aspect of the study later. Here we simply note that while our subjects often disagreed, there was fair consensus that what might count as a part depends on the nature of the word: a physical object yields physical parts, an institution yields its members, and a concept yields its characteristics and processes. In other words, "floor" is part of "building" and "plot" is part of "book."

Our first goal is to find lexical patterns that tend to indicate part-whole relations. Following Hearst [2], we find possible patterns by taking two words that are in a part-whole relation (e.g., basement and building) and finding sentences in our corpus (we used the North American News Corpus (NANC) from the LDC) that have these words within close proximity. The first few such sentences are:

... the basement of the building ...
... the basement in question is in a four-story apartment building ...
... the basement of the apartment building ...
... from the building's basement ...
... the basement of a building ...
... the basements of buildings ...

From these examples we construct the five patterns shown in Table 1. We assume here that parts and wholes are represented by individual lexical items (more specifically, as head nouns of noun phrases) as opposed to complete noun phrases, or as a sequence of "important" noun modifiers together with the head. This occasionally causes problems; e.g., "conditioner" was marked by our informants as not part of "car", whereas "air conditioner" probably would have made it into a part list. Nevertheless, in most cases head nouns have worked quite well on their own.

We evaluated these patterns by observing how they performed in an experiment on a single example. Table 2 shows the 20 highest ranked part words (with the seed word "car") for each of the patterns A-E. (We discuss later how the rankings were obtained.) Table 2 shows that patterns A and B clearly outperform patterns C, D, and E. Although parts occur in all five patterns, the lists for A and B are predominantly parts-oriented. The relatively poor performance of patterns C and E was anticipated, as many things occur "in" cars (or buildings, etc.) other than their parts. Pattern D is not so obviously bad, as it differs from the plural case of pattern B only in the lack of the determiner "the" or "a". However, this difference proves critical in that pattern D tends to pick up "counting" nouns such as "truckload." On the basis of this experiment we decided to proceed using only patterns A and B from Table 1.

We use the LDC North American News Corpus (NANC), which is a compilation of the wire output of several US newspapers. The total corpus is about 100,000,000 words. We ran our program on the whole data set, which takes roughly four hours on our network. The bulk of that time (around 90%) is spent tagging the corpus.
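As a rough illustration of how patterns A and B can be put to work, the sketch below scans tokenized text with two regular expressions standing in for the genitive and "of" constructions. The exact regexes, the function name, and the use of plain regular expressions instead of a part-of-speech tagger are our assumptions, not the authors' implementation.

```python
import re
from collections import Counter

def find_part_candidates(text, whole):
    """Collect candidate part words for `whole` with two stand-in patterns:
    A: "<whole> 's <part>"             e.g. "the building 's basement"
    B: "<part> of (the|a|an) <whole>"  e.g. "the basement of a building"
    `text` is assumed to be lowercased and tokenized (spaces around "'s")."""
    wholes = rf"(?:{whole}|{whole}s)"
    pattern_a = re.compile(rf"\b{wholes} 's ([a-z]+)")
    pattern_b = re.compile(rf"\b([a-z]+) of (?:the|a|an) {wholes}\b")
    counts = Counter()
    for pattern in (pattern_a, pattern_b):
        for match in pattern.finditer(text):
            counts[match.group(1)] += 1
    return counts

corpus = "... the basement of a building ... the building 's basement ..."
print(find_part_candidates(corpus, "building"))   # Counter({'basement': 2})
```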
As is typical in this sort of work, we assume that our evidence (occurrences of patterns A and B) is independently and identically distributed (iid). We have found this assumption reasonable, but its breakdown has led to a few errors. In particular, a drawback of the NANC is the occurrence of repeated articles; since the corpus consists of all of the articles that come over the wire, some days include multiple, updated versions of the same story, containing identical paragraphs or sentences. We wrote programs to weed out such cases, but ultimately found them of little use. First, "update" articles still have substantial variation, so there is a continuum between these and articles that are simply on the same topic. Second, our data is so sparse that any such repeats are very unlikely to manifest themselves as repeated examples of part-type patterns. Nevertheless, since two or three occurrences of a word can make it rank highly, our results have a few anomalies that stem from failure of the iid assumption (e.g., quite appropriately, "clunker").

Our seeds are one word (such as "car") and its plural. We do not claim that all single words would fare as well as our seeds, as we picked highly probable words for our corpus (such as "building" and "hospital") that we thought would have parts that might also be mentioned therein. With enough text, one could probably get reasonable results with any noun that met these criteria.

The program has three phases. The first identifies and records all occurrences of patterns A and B in our corpus. The second filters out all words ending with "ing", "ness", or "ity", since these suffixes typically occur in words that denote a quality rather than a physical object. Finally, we order the possible parts by the likelihood that they are true parts according to some appropriate metric.

We took some care in the selection of this metric. At an intuitive level the metric should be something like Pr(w | p). (Here and in what follows, w denotes the outcome of the random variable generating wholes, and p the outcome for parts. W(w) states that w appears in the patterns A and B as a whole, while P(p) states that p appears as a part.) Metrics of the form Pr(w | p) have the desirable property that they are invariant over p with radically different base frequencies, and for this reason have been widely used in corpus-based lexical-semantic research [3,6,9]. However, in making this intuitive idea somewhat more precise we found two closely related versions; we call metrics based on the first of these "loosely conditioned" and those based on the second "strongly conditioned".

While invariance with respect to frequency is generally a good property, such invariant metrics can lead to bad results when used with sparse data. In particular, if a part word p has occurred only once in the data in the A and B patterns, then perforce Pr(w | p) = 1 for the entity w with which it is paired. Thus this metric must be tempered to take into account the quantity of data that supports its conclusion. To put this another way, we want to pick (w, p) pairs that have two properties: Pr(w | p) is high, and the count | w, p | is large.
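A minimal sketch of the second and third phases under our own naming: the suffix filter is as described above, while the raw conditional estimate illustrates exactly the sparse-data weakness that the ranking metrics discussed next must temper (the paper's two conditioning variants are not spelled out here).

```python
from collections import Counter

QUALITY_SUFFIXES = ("ing", "ness", "ity")

def filter_quality_words(candidates):
    # Phase two: drop words whose suffix suggests a quality, not an object.
    return Counter({part: n for part, n in candidates.items()
                    if not part.endswith(QUALITY_SUFFIXES)})

def p_whole_given_part(pair_counts, part_totals, whole, part):
    # Raw Pr(w | p): fraction of the part's pattern occurrences that involve
    # this whole.  A part seen once gives 1.0 regardless of support, which is
    # the sparse-data problem the ranking metric must address.
    return pair_counts[(whole, part)] / part_totals[part]

candidates = Counter({"basement": 12, "remodeling": 3, "ugliness": 2})
print(filter_quality_words(candidates))   # Counter({'basement': 12})
```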
We need a metric that combines these two desiderata in a natural way. We tried two such metrics.

The first is Dunning's [10] log-likelihood metric, which measures how "surprised" one would be to observe the data counts | w, p |, | ¬w, p |, | w, ¬p |, and | ¬w, ¬p | if one assumes that Pr(w | p) = Pr(w). Intuitively this will be high when the observed Pr(w | p) >> Pr(w) and when the counts supporting this calculation are large. The second metric is proposed by Johnson (personal communication). He suggests asking the question: how far apart can we be sure the distributions Pr(w | p) and Pr(w) are if we require a particular significance level, say .05 or .01? We call this new test the "significant-difference" test, or sigdiff. Johnson observes that compared to sigdiff, log-likelihood tends to overestimate the importance of data frequency at the expense of the distance between Pr(w | p) and Pr(w).

Table 3 shows the 20 highest ranked words for each statistical method, using the seed word "car." The first group contains the words found for the method we perceive as the most accurate, sigdiff and strong conditioning. The other groups show the differences between them and the first group. The + category means that this method adds the word to its list; the − category means the opposite. For example, "back" is on the sigdiff-loose list but not the sigdiff-strong list. In general, sigdiff worked better than surprise, and strong conditioning worked better than loose conditioning. In both cases the less favored methods tend to promote words that are less specific ("back" over "airbag", "use" over "radiator"). Furthermore, the combination of sigdiff and strong conditioning worked better than either by itself. Thus all results in this paper, unless explicitly noted otherwise, were gathered using sigdiff and strong conditioning combined.

We tested five subjects (all of whom were unaware of our goals) for their concept of a "part." We asked them to rate sets of 100 words, of which 50 were in our final results set. Tables 6-11 show the top 50 words for each of our six seed words along with the number of subjects who marked the word as a part of the seed concept. The scores of individual words vary greatly, but there was relative consensus on most words. We put an asterisk next to words that the majority of subjects marked as correct. Lacking a formal definition of part, we can only define those words as correct and the rest as wrong. While the scoring is admittedly not perfect, it provides an adequate reference result.

Table 4 summarizes these results. There we show the number of correct part words in the top 10, 20, 30, 40, and 50 parts for each seed (e.g., for "book", 8 of the top 10 are parts, and 14 of the top 20). Overall, about 55% of the top 50 words for each seed are parts, and about 70% of the top 20 for each seed. The reader should also note that we tried one ambiguous word, "plant", to see what would happen. Our program finds parts corresponding to both senses, though given the nature of our text, the industrial use is more common. Our subjects marked both kinds of parts as correct, but even so, this produced the weakest part list of the six words we tried.
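The paper does not reproduce either formula, so the sketch below only illustrates the idea behind sigdiff: score a (whole, part) pair by how confidently Pr(w | p) exceeds the background rate Pr(w), here via a Wilson lower confidence bound at roughly the .05 level. The function names and the choice of interval are ours, not the authors'.

```python
import math

def wilson_lower(k, n, z=1.96):
    """Lower end of the Wilson score interval for a binomial proportion."""
    if n == 0:
        return 0.0
    phat = k / n
    centre = phat + z * z / (2 * n)
    margin = z * math.sqrt(phat * (1 - phat) / n + z * z / (4 * n * n))
    return (centre - margin) / (1 + z * z / n)

def sigdiff_like_score(k_wp, n_p, k_w, n_total, z=1.96):
    # How confidently does Pr(w | p) exceed the background rate Pr(w)?
    return wilson_lower(k_wp, n_p, z) - k_w / n_total

# A part seen once with the whole has Pr(w | p) = 1 but little support,
# so it no longer automatically outranks a well-supported part.
print(round(sigdiff_like_score(1, 1, 500, 100_000), 2))    # ~0.2
print(round(sigdiff_like_score(40, 60, 500, 100_000), 2))  # ~0.54
```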
As a baseline we also tried using as our "pattern" the head nouns that immediately surround our target word. We then applied the same "strong conditioning, sigdiff" statistical test to rank the candidates. This performed quite poorly: of the top 50 candidates for each target, only 8% were parts, as opposed to the 55% for our program.

We also compared our parts lists to those of WordNet. Table 5 shows the parts of "car" in WordNet that are not in our top 20 (+) and the words in our top 20 that are not in WordNet (−). There are definite tradeoffs, although we would argue that our top-20 set is both more specific and more comprehensive. Two notable words our top 20 lack are "engine" and "door", both of which occur before rank 100. More generally, all WordNet parts occur somewhere before rank 500, with the exception of "tailfin", which never occurs with car. It would seem that our program would be a good tool for expanding WordNet, as a person can scan and mark the list of part words in a few minutes.

The program presented here can find parts of objects given a word denoting the whole object and a large corpus of unmarked text. The program is about 55% accurate for the top 50 proposed parts for each of the six examples upon which we tested it. There does not seem to be a single cause for the 45% of the cases that are mistakes. We present here a few problems that have caught our attention.

Idiomatic phrases like "a jalopy of a car" or "the son of a gun" provide problems that are not easily weeded out. Depending on the data, these phrases can be as prevalent as the legitimate parts. In some cases problems arose because of tagger mistakes. For example, "re-enactment" would be found as part of a "car" using pattern B in the phrase "the re-enactment of the car crash" if "crash" is tagged as a verb. The program had some tendency to find qualities of objects. For example, "driveability" is strongly correlated with car. We try to weed out most of the qualities by removing words with the suffixes "ness", "ing", and "ity."

The most persistent problem is sparse data, which is the source of most of the noise. More data would almost certainly allow us to produce better lists, both because the statistics we are currently collecting would be more accurate, and because larger numbers would allow us to find other reliable indicators. For example, idiomatic phrases might be recognized as such: we see "jalopy of a car" (two times) but not, of course, "the car's jalopy". Words that appear in only one of the two patterns are suspect, but to use this rule we need sufficient counts on the good words to be sure we have a representative sample. At 100 million words, the NANC is not exactly small, but we were able to process it in about four hours with the machines at our disposal, so still larger corpora would not be out of the question.

Finally, as noted above, Hearst [2] tried to find parts in corpora but did not achieve good results.
She does not say what procedures were used, but assuming that the work closely paralleled her work on hyponyms, we suspect that our relative success was due to our very large corpus and the use of more refined statistical measures for ranking the output.

This research was funded in part by NSF grant IRI-9319516 and ONR grant N0014-96-1-0549. Thanks to the entire statistical NLP group at Brown, and particularly to Mark Johnson, Brian Roark, Gideon Mann, and Ana-Maria Popescu, who provided invaluable help on the project.
Summary. We present a method for extracting parts of objects from wholes (e.g., "speedometer" from "car"). Given a very large corpus, our method finds part words with 55% accuracy for the top 50 words as ranked by the system. The part list could be scanned by an end-user and added to an existing ontology (such as WordNet), or used as part of a rough semantic lexicon. To filter out attributes that are regarded as qualities (like driving ability) rather than parts (like steering wheels), we remove words ending with the suffixes -ness, -ing, and -ity.
The Mathematics of Statistical Machine Translation: Parameter Estimation

We describe a series of five statistical models of the translation process and give algorithms for estimating the parameters of these models given a set of pairs of sentences that are translations of one another. We define a concept of word-by-word alignment between such pairs of sentences. For any given pair of such sentences, each of our models assigns a probability to each of the possible word-by-word alignments. We give an algorithm for seeking the most probable of these alignments. Although the algorithm is suboptimal, the alignment thus obtained accounts well for the word-by-word relationships in the pair of sentences. We have a great deal of data in French and English from the proceedings of the Canadian Parliament. Accordingly, we have restricted our work to these two languages; but we feel that because our algorithms have minimal linguistic content they would work well on other pairs of languages. We also feel, again because of the minimal linguistic content of our algorithms, that it is reasonable to argue that word-by-word alignments are inherent in any sufficiently large bilingual corpus.

The growing availability of bilingual, machine-readable texts has stimulated interest in methods for extracting linguistically valuable information from such texts. For example, a number of recent papers deal with the problem of automatically obtaining pairs of aligned sentences from parallel corpora (Warwick and Russell 1990; Brown, Lai, and Mercer 1991; Gale and Church 1991b; Kay 1991). Brown et al. (1990) assert, and Brown, Lai, and Mercer (1991) and Gale and Church (1991b) both show, that it is possible to obtain such aligned pairs of sentences without inspecting the words that the sentences contain. Brown, Lai, and Mercer base their algorithm on the number of words that the sentences contain, while Gale and Church base a similar algorithm on the number of characters that the sentences contain. The lesson to be learned from these two efforts is that simple, statistical methods can be surprisingly successful in achieving linguistically interesting goals. Here, we address a natural extension of that work: matching up the words within pairs of aligned sentences.
In recent papers, Brown et al. (1988, 1990) propose a statistical approach to machine translation from French to English. In the latter of these papers, they sketch an algorithm for estimating the probability that an English word will be translated into any particular French word and show that such probabilities, once estimated, can be used together with a statistical model of the translation process to align the words in an English sentence with the words in its French translation (see their Figure 3). Pairs of sentences with words aligned in this way offer a valuable resource for work in bilingual lexicography and machine translation.

Section 2 is a synopsis of our statistical approach to machine translation. Following this synopsis, we develop some terminology and notation for describing the word-by-word alignment of pairs of sentences. In Section 4 we describe our series of models of the translation process and give an informal discussion of the algorithms by which we estimate their parameters from data. We have written Section 4 with two aims in mind: first, to provide the interested reader with sufficient detail to reproduce our results, and second, to hold the mathematics at the level of college calculus. A few more difficult parts of the discussion have been postponed to the Appendix. In Section 5, we present results obtained by estimating the parameters for these models from a large collection of aligned pairs of sentences from the Canadian Hansard data (Brown, Lai, and Mercer 1991). For a number of English words, we show translation probabilities that give convincing evidence of the power of statistical methods to extract linguistically interesting correlations from large corpora. We also show automatically derived word-by-word alignments for several sentences. In Section 6, we discuss some shortcomings of our models and propose modifications to address some of them. In the final section, we discuss the significance of our work and the possibility of extending it to other pairs of languages. Finally, we include two appendices: one to summarize notation and one to collect the formulae for the various models that we describe and to fill an occasional gap in our development.

In 1949, Warren Weaver suggested applying the statistical and cryptanalytic techniques then emerging from the nascent field of communication theory to the problem of using computers to translate text from one natural language to another (published in Weaver 1955). Efforts in this direction were soon abandoned for various philosophical and theoretical reasons, but at a time when the most advanced computers were of a piece with today's digital watch, any such approach was surely doomed to computational starvation. Today, the fruitful application of statistical methods to the study of machine translation is within the computational grasp of anyone with a well-equipped workstation.

A string of English words, e, can be translated into a string of French words in many different ways. Often, knowing the broader context in which e occurs may serve to winnow the field of acceptable French translations, but even so, many acceptable translations will remain; the choice among them is largely a matter of taste. In statistical translation, we take the view that every French string, f, is a possible translation of e. We assign to every pair of strings (e, f) a number Pr(f | e), which we interpret as the probability that a translator, when presented with e, will produce f as his translation.
We further take the view that when a native speaker of French produces a string of French words, he has actually conceived of a string of English words, which he translated mentally. Given a French string f, the job of our translation system is to find the string e that the native speaker had in mind when he produced f. We minimize our chance of error by choosing that English string e for which Pr(e | f) is greatest. Using Bayes' theorem, we can write

Pr(e | f) = Pr(e) Pr(f | e) / Pr(f).

Since the denominator here is independent of e, finding ê is the same as finding e so as to make the product Pr(e) Pr(f | e) as large as possible. We arrive, then, at the fundamental equation of machine translation:

ê = argmax_e Pr(e) Pr(f | e).   (2)

As a representation of the process by which a human being translates a passage from French to English, this equation is fanciful at best. One can hardly imagine someone rifling mentally through the list of all English passages computing the product of the a priori probability of the passage, Pr(e), and the conditional probability of the French passage given the English passage, Pr(f | e). Instead, there is an overwhelming intuitive appeal to the idea that a translator proceeds by first understanding the French, and then expressing in English the meaning that he has thus grasped. Many people have been guided by this intuitive picture when building machine translation systems.

From a purely formal point of view, on the other hand, equation (2) is completely adequate. The conditional distribution Pr(f | e) is just an enormous table that associates a real number between zero and one with every possible pairing of a French passage and an English passage. With the proper choice for this distribution, translations of arbitrarily high quality can be achieved. Of course, to construct Pr(f | e) by examining individual pairs of French and English passages one by one is out of the question. Even if we restrict our attention to passages no longer than a typical novel, there are just too many such pairs. But this is only a problem in practice, not in principle. The essential question for statistical translation, then, is not a philosophical one, but an empirical one: can one construct approximations to the distributions Pr(e) and Pr(f | e) that are good enough to achieve an acceptable quality of translation?

Equation (2) summarizes the three computational challenges presented by the practice of statistical translation: estimating the language model probability, Pr(e); estimating the translation model probability, Pr(f | e); and devising an effective and efficient suboptimal search for the English string that maximizes their product. We call these the language modeling problem, the translation modeling problem, and the search problem. The language modeling problem for machine translation is essentially the same as that for speech recognition and has been dealt with elsewhere in that context (see, for example, the recent paper by Maltese and Mancini [1992] and references therein). We hope to deal with the search problem in a later paper. In this paper, we focus on the translation modeling problem.
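As an aside, equation (2) has a direct procedural reading once the two models and a candidate set are available. The sketch below is a toy illustration of the decision rule only, with made-up scores, and sidesteps the search problem entirely.

```python
def decode(f, candidates, lm_logprob, tm_logprob):
    # Choose e maximizing Pr(e) * Pr(f | e), i.e. the sum of log-probabilities.
    return max(candidates, key=lambda e: lm_logprob(e) + tm_logprob(f, e))

candidates = ["what could we have done ?", "what we done could have ?"]
lm = {"what could we have done ?": -9.0, "what we done could have ?": -15.0}
tm = {"what could we have done ?": -11.0, "what we done could have ?": -10.5}
best = decode("qu'aurions-nous pu faire ?", candidates,
              lambda e: lm[e], lambda f, e: tm[e])
print(best)   # the well-formed candidate wins despite a lower Pr(f | e)
```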
Before we turn to this problem, however, we should address an issue that may be a concern to some readers: why do we estimate Pr(e) and Pr(f | e) rather than estimate Pr(e | f) directly? We are really interested in this latter probability. Wouldn't we reduce our problems from three to two by this direct approach? If we can estimate Pr(f | e) adequately, why can't we just turn the whole process around to estimate Pr(e | f)?

To understand this, imagine that we divide French and English strings into those that are well-formed and those that are ill-formed. This is not a precise notion. We have in mind that strings like il va à la bibliothèque, or I live in a house, or even colorless green ideas sleep furiously are well-formed, but that strings like à la va il bibliothèque or a I in live house are not. When we translate a French string into English, we can think of ourselves as springing from a well-formed French string into the sea of well-formed English strings with the hope of landing on a good one. It is important, therefore, that our model for Pr(e | f) concentrate its probability as much as possible on well-formed English strings. But it is not important that our model for Pr(f | e) concentrate its probability on well-formed French strings. If we were to reduce the probability of all well-formed French strings by the same factor, spreading the probability thus liberated over ill-formed French strings, there would be no effect on our translations: the argument that maximizes some function f(x) also maximizes cf(x) for any positive constant c.

As we shall see below, our translation models are prodigal, spraying probability all over the place, most of it on ill-formed French strings. In fact, as we discuss in Section 4.5, two of our models waste much of their probability on things that are not strings at all, having, for example, several different second words but no first word. If we were to turn one of these models around to model Pr(e | f) directly, the result would be a model with so little probability concentrated on well-formed English strings as to confound any scheme to discover one. The two factors in equation (2) cooperate. The translation model probability is large for English strings, whether well- or ill-formed, that have the necessary words in them in roughly the right places to explain the French. The language model probability is large for well-formed English strings regardless of their connection to the French. Together, they produce a large probability for well-formed English strings that account well for the French. We cannot achieve this simply by reversing our translation models.

We say that a pair of strings that are translations of one another form a translation, and we show this by enclosing the strings in parentheses and separating them by a vertical bar. Thus, we write the translation (qu'aurions-nous pu faire? | what could we have done?) to show that what could we have done? is a translation of qu'aurions-nous pu faire? When the strings end in sentences, we usually omit the final stop unless it is a question mark or an exclamation point.

Brown et al. (1990) introduce the idea of an alignment between a pair of strings as an object indicating, for each word in the French string, that word in the English string from which it arose. Alignments are shown graphically, as in Figure 1, by drawing lines, which we call connections, from some of the English words to some of the French words. The alignment in Figure 1 has seven connections: (the, le), (program, programme), and so on. Following the notation of Brown et al., we write this alignment as (le programme a été mis en application | and the(1) program(2) has(3) been(4) implemented(5,6,7)).
The list of numbers following an English word shows the positions in the French string of the words to which it is connected. Because and is not connected to any French words here, there is no list of numbers after it. We consider every alignment to be correct with some probability, and so we find (le programme a été mis en application | and(1,2,3,4,5,6,7) the program has been implemented) perfectly acceptable. Of course, we expect it to be much less probable than the alignment shown in Figure 1.

In Figure 1 each French word is connected to exactly one English word, but more general alignments are possible and may be appropriate for some translations. For example, we may have a French word connected to several English words, as in Figure 2, which we write as (le reste appartenait aux autochtones | the(1) balance(2) was(3) the(3) territory(3) of(4) the(4) aboriginal(5) people(5)). More generally still, we may have several French words connected to several English words, as in Figure 3, which we write as (les pauvres sont démunis | the(1) poor(2) don't(3,4) have(3,4) any(3,4) money(3,4)). Here, the four English words don't have any money work together to generate the two French words sont démunis.

In a figurative sense, an English passage is a web of concepts woven together according to the rules of English grammar. When we look at a passage, we cannot see the concepts directly but only the words that they leave behind. To show that these words are related to a concept but are not quite the whole story, we say that they form a cept. Some of the words in a passage may participate in more than one cept, while others may participate in none, serving only as a sort of syntactic glue to bind the whole together. When a passage is translated into French, each of its cepts contributes some French words to the translation. We formalize this use of the term cept and relate it to the idea of an alignment as follows. We call the set of English words connected to a French word in a particular alignment the cept that generates the French word. Thus, an alignment resolves an English string into a set of possibly overlapping cepts that we call the ceptual scheme of the English string with respect to the alignment. The alignment in Figure 3 contains the three cepts the, poor, and don't have any money. When one or more of the French words is connected to no English words, we say that the ceptual scheme includes the empty cept and that each of these words has been generated by this empty cept.

Formally, a cept is a subset of the positions in the English string together with the words occupying those positions. When we write the words that make up a cept, we sometimes affix a subscript to each one showing its position. The alignment in Figure 2 includes the cepts the_1 and of_6 the_7, but not the cepts of_6 the_1 or the_7. In (j'applaudis à la décision | I(1) applaud(2) the(4) decision(5)), à is generated by the empty cept. Although the empty cept has no position, we place it by convention in position zero, and write it as e_0. Thus, we may also write the previous alignment as (j'applaudis à la décision | e_0(3) I(1) applaud(2) the(4) decision(5)). We denote the set of alignments of (f | e) by A(e, f).
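Read as a data structure, this notation is just a list of French-position lists, one per English word. The sketch below (our own representation, not the paper's) encodes the Figure 1 alignment and recovers the cept that generates a French word.

```python
# (le programme a été mis en application | and the(1) program(2) has(3)
#  been(4) implemented(5,6,7)) -- "and" carries no positions, so its list is empty.
english = ["and", "the", "program", "has", "been", "implemented"]
connections = [[], [1], [2], [3], [4], [5, 6, 7]]

def cept_of(french_position):
    # The cept generating a French word: the English words connected to it
    # (at most one here, since each French word has a single connection).
    return [e for e, frs in zip(english, connections) if french_position in frs]

print(cept_of(6))   # ['implemented']
print(cept_of(1))   # ['the']
```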
If e has length l and f has length m, there are lm different connections that can be drawn between them, because each of the m French words can be connected to any of the l English words. Since an alignment is determined by the connections that it contains, and since a subset of the possible connections can be chosen in 2^{lm} ways, there are 2^{lm} alignments in A(e, f).

In this section, we develop a series of five translation models together with the algorithms necessary to estimate their parameters. Each model gives a prescription for computing the conditional probability Pr(f | e), which we call the likelihood of the translation (f, e). This likelihood is a function of a large number of free parameters that we must estimate in a process that we call training. The likelihood of a set of translations is the product of the likelihoods of its members. In broad outline, our plan is to guess values for these parameters and then to apply the EM algorithm (Baum 1972; Dempster, Laird, and Rubin 1977) iteratively so as to approach a local maximum of the likelihood of a particular set of translations that we call the training data. When the likelihood of the training data has more than one local maximum, the one that we approach will depend on our initial guess.

In Models 1 and 2, we first choose a length for the French string, assuming all reasonable lengths to be equally likely. Then, for each position in the French string, we decide how to connect it to the English string and what French word to place there. In Model 1 we assume all connections for each French position to be equally likely. Therefore, the order of the words in e and f does not affect Pr(f | e). In Model 2 we make the more realistic assumption that the probability of a connection depends on the positions it connects and on the lengths of the two strings. Therefore, for Model 2, Pr(f | e) does depend on the order of the words in e and f. Although it is possible to obtain interesting correlations between some pairs of frequent words in the two languages using Models 1 and 2, as we will see later (in Figure 5), these models often lead to unsatisfactory alignments.

In Models 3, 4, and 5, we develop the French string by choosing, for each word in the English string, first the number of words in the French string that will be connected to it, then the identity of these French words, and finally the actual positions in the French string that these words will occupy. It is this last step that determines the connections between the English string and the French string, and it is here that these three models differ. In Model 3, as in Model 2, the probability of a connection depends on the positions that it connects and on the lengths of the English and French strings. In Model 4 the probability of a connection depends in addition on the identities of the French and English words connected and on the positions of any other French words that are connected to the same English word. Models 3 and 4 are deficient, a technical concept defined and discussed in Section 4.5. Briefly, this means that they waste some of their probability on objects that are not French strings at all. Model 5 is very much like Model 4, except that it is not deficient.

Models 1-4 serve as stepping stones to the training of Model 5. Models 1 and 2 have an especially simple mathematical form, so that iterations of the EM algorithm can be computed exactly.
That is, we can explicitly perform sums over all possible alignments for these two models. In addition, Model 1 has a unique local maximum, so that parameters derived for it in a series of EM iterations do not depend on the starting point for the iterations. As explained below, we use Model 1 to provide initial estimates for the parameters of Model 2. In Model 2 and subsequent models, the likelihood function does not have a unique local maximum, but by initializing each model from the parameters of the model before it, we arrive at estimates of the parameters of the final model that do not depend on our initial estimates of the parameters for Model 1. In Models 3 and 4, we must be content with approximate EM iterations because it is not feasible to carry out sums over all possible alignments for these models. But, while approaching more closely the complexity of Model 5, they retain enough simplicity to allow an efficient investigation of the neighborhood of probable alignments and therefore allow us to include what we hope are all of the important alignments in each EM iteration.

In the remainder of this section, we give an informal but reasonably precise description of each of the five models and an intuitive account of the EM algorithm as applied to them. We assume the reader to be comfortable with Lagrange multipliers, partial differentiation, and constrained optimization as they are presented in a typical college calculus text, and to have a nodding acquaintance with random variables. On the first time through, the reader may wish to jump from here directly to Section 5, returning to this section when and if he should desire to understand more deeply how the results reported later are achieved.

The basic mathematical object with which we deal here is the joint probability distribution Pr(F = f, A = a, E = e), where the random variables F and E are a French string and an English string making up a translation, and the random variable A is an alignment between them. We also consider various marginal and conditional probability distributions that can be constructed from Pr(F = f, A = a, E = e), especially the distribution Pr(F = f | E = e). We generally follow the common convention of using uppercase letters to denote random variables and the corresponding lowercase letters to denote specific values that the random variables may take. We have already used l and m to represent the lengths of the strings e and f, and so we use L and M to denote the corresponding random variables. When there is no possibility for confusion, or, more properly, when the probability of confusion is not thereby materially increased, we write Pr(f, a, e) for Pr(F = f, A = a, E = e), and use similar shorthands throughout.

We can write the likelihood of (f | e) in terms of the conditional probability Pr(f, a | e) as

Pr(f | e) = Σ_a Pr(f, a | e).   (3)

The sum here, like all subsequent sums over a, is over the elements of A(e, f). We restrict ourselves in this section to alignments like the one shown in Figure 1, where each French word has exactly one connection. In this kind of alignment, each cept is either a single English word or it is empty. Therefore, we can assign cepts to positions in the English string, reserving position zero for the empty cept.
If the English string, e = e_1 e_2 ... e_l, has l words, and the French string, f = f_1 f_2 ... f_m, has m words, then the alignment, a, can be represented by a series, a_1^m = a_1 a_2 ... a_m, of m values, each between 0 and l, such that if the word in position j of the French string is connected to the word in position i of the English string, then a_j = i, and if it is not connected to any English word, then a_j = 0. Without loss of generality, we can write

Pr(f, a | e) = Pr(m | e) ∏_{j=1}^{m} Pr(a_j | a_1^{j-1}, f_1^{j-1}, m, e) Pr(f_j | a_1^{j}, f_1^{j-1}, m, e).   (4)

This is only one of many ways in which Pr(f, a | e) can be written as the product of a series of conditional probabilities. It is important to realize that equation (4) is not an approximation. Regardless of the form of Pr(f, a | e), it can always be analyzed into a product of terms in this way. We are simply asserting in this equation that when we generate a French string together with an alignment from an English string, we can first choose the length of the French string given our knowledge of the English string. Then we can choose where to connect the first position in the French string given our knowledge of the English string and the length of the French string. Then we can choose the identity of the first word in the French string given our knowledge of the English string, the length of the French string, and the position in the English string to which the first position in the French string is connected, and so on. As we step through the French string, at each point we make our next choice given our complete knowledge of the English string and of all our previous choices as to the details of the French string and its alignment.

The conditional probabilities on the right-hand side of equation (4) cannot all be taken as independent parameters because there are too many of them. In Model 1, we assume that Pr(m | e) is independent of e and m; that Pr(a_j | a_1^{j-1}, f_1^{j-1}, m, e) depends only on l, the length of the English string, and therefore must be (l + 1)^{-1}; and that Pr(f_j | a_1^{j}, f_1^{j-1}, m, e) depends only on f_j and e_{a_j}. The parameters, then, are ε ≡ Pr(m | e) and t(f | e) ≡ Pr(f_j = f | e_{a_j} = e), which we call the translation probability of f_j given e_{a_j}. We think of ε as some small, fixed number. The distribution of M, the length of the French string, is unnormalized, but this is a minor technical issue of no significance to our computations. If we wish, we can think of M as having some finite range. As long as this range encompasses everything that actually occurs in training data, no problems arise.

We turn now to the problem of estimating the translation probabilities for Model 1. The joint likelihood of a French string and an alignment given an English string is

Pr(f, a | e) = ε / (l + 1)^m ∏_{j=1}^{m} t(f_j | e_{a_j}),   (5)

and hence

Pr(f | e) = ε / (l + 1)^m Σ_{a_1=0}^{l} ··· Σ_{a_m=0}^{l} ∏_{j=1}^{m} t(f_j | e_{a_j}).   (6)

We wish to adjust the translation probabilities so as to maximize Pr(f | e) subject to the constraints that, for each e,

Σ_f t(f | e) = 1.   (7)

Following standard practice for constrained maximization, we introduce Lagrange multipliers λ_e and seek an unconstrained extremum of the auxiliary function

h(t, λ) ≡ Pr(f | e) − Σ_e λ_e (Σ_f t(f | e) − 1).   (8)

An extremum occurs when all of the partial derivatives of h with respect to the components of t and λ are zero. That the partial derivatives with respect to the components of λ be zero is simply a restatement of the constraints on the translation probabilities. The partial derivative of h with respect to t(f | e) involves the Kronecker delta function δ, equal to one when both of its arguments are the same and equal to zero otherwise. This partial derivative will be zero provided that the translation probabilities satisfy equation (10). Superficially, equation (10) looks like a solution to the extremum problem, but it is not, because the translation probabilities appear on both sides of the equal sign.
Nonetheless, it suggests an iterative procedure for finding a solution: given an initial guess for the translation probabilities, we can evaluate the right-hand side of equation (10) and use the result as a new estimate for t(f | e). (Here and elsewhere, the Lagrange multipliers simply serve as a reminder that we need to normalize the translation probabilities so that they satisfy equation (7).) This process, when applied repeatedly, is called the EM algorithm. That it converges to a stationary point of h in situations like this was first shown by Baum (1972) and later by others (Dempster, Laird, and Rubin 1977).

With the aid of equation (5), we can reexpress equation (10) as equation (11), whose inner sum counts the number of times that e connects to f in the alignment a. We call the expected number of times that e connects to f in the translation (f | e) the count of f given e for (f | e) and denote it by c(f | e; f, e). By definition,

c(f | e; f, e) = Σ_a Pr(a | e, f) Σ_{j=1}^{m} δ(f, f_j) δ(e, e_{a_j}),   (12)

where Pr(a | e, f) = Pr(f, a | e) / Pr(f | e). If we replace λ_e by λ_e Pr(f | e), then equation (11) can be written very compactly as

t(f | e) = λ_e^{-1} c(f | e; f, e).   (13)

In practice, our training data consists of a set of translations, (f^(1) | e^(1)), (f^(2) | e^(2)), ..., (f^(S) | e^(S)), so this equation becomes

t(f | e) = λ_e^{-1} Σ_{s=1}^{S} c(f | e; f^(s), e^(s)).   (14)

Here, λ_e serves only as a reminder that the translation probabilities must be normalized.

Usually, it is not feasible to evaluate the expectation in equation (12) exactly. Even when we exclude multi-word cepts, there are still (l + 1)^m alignments possible for (f | e). Model 1, however, is special because by recasting equation (6), we arrive at an expression that can be evaluated efficiently. The right-hand side of equation (6) is a sum of terms each of which is a monomial in the translation probabilities. Each monomial contains m translation probabilities, one for each of the words in f. Different monomials correspond to different ways of connecting words in f to cepts in e, with every way appearing exactly once. By direct evaluation, we see that

Σ_{a_1=0}^{l} ··· Σ_{a_m=0}^{l} ∏_{j=1}^{m} t(f_j | e_{a_j}) = ∏_{j=1}^{m} Σ_{i=0}^{l} t(f_j | e_i).   (15)

An example may help to clarify this. Suppose that m = 3 and l = 1, and that we write t_{ji} as a shorthand for t(f_j | e_i). Then the left-hand side of equation (15) is t_{10} t_{20} t_{30} + t_{10} t_{20} t_{31} + ··· + t_{11} t_{21} t_{30} + t_{11} t_{21} t_{31}, and the right-hand side is (t_{10} + t_{11})(t_{20} + t_{21})(t_{30} + t_{31}). It is routine to verify that these are the same. Therefore, we can interchange the sums in equation (6) with the product to obtain

Pr(f | e) = ε / (l + 1)^m ∏_{j=1}^{m} Σ_{i=0}^{l} t(f_j | e_i).   (16)

If we use this expression in place of equation (6) when we write the auxiliary function in equation (8), we find that

c(f | e; f, e) = t(f | e) / (t(f | e_0) + ··· + t(f | e_l)) Σ_{j=1}^{m} δ(f, f_j) Σ_{i=0}^{l} δ(e, e_i).   (17)

Thus, the number of operations necessary to calculate a count is proportional to l + m rather than to (l + 1)^m, as equation (12) might suggest.

Using equations (14) and (17), we can estimate the parameters t(f | e) as follows. The details of our initial guesses for t(f | e) are unimportant because Pr(f | e) has a unique local maximum for Model 1, as is shown in Appendix B. We start with all of the t(f | e) equal, but any other choice that avoids zeros would lead to the same final solution.
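The whole Model 1 procedure is compact enough to sketch. The following is an illustration under our own naming conventions, with a "NULL" token standing in for the empty cept e_0; it follows the reestimation of equations (14) and (17) but is not the authors' code.

```python
from collections import defaultdict

def train_model1(bitext, iterations=5):
    """EM training sketch for Model 1.  `bitext` is a list of
    (french_words, english_words) pairs; "NULL" plays the role of e_0."""
    # Uniform initial guess; Model 1's likelihood has a unique maximum,
    # so the starting point does not matter as long as no value is zero.
    t = defaultdict(lambda: 1.0)
    for _ in range(iterations):
        count = defaultdict(float)   # expected counts c(f | e)
        total = defaultdict(float)
        for f_sent, e_sent in bitext:
            e_full = ["NULL"] + e_sent
            for f in f_sent:
                # Because the sums over alignments factor (equation 15),
                # each French word only needs this small denominator.
                denom = sum(t[(f, e)] for e in e_full)
                for e in e_full:
                    c = t[(f, e)] / denom
                    count[(f, e)] += c
                    total[e] += c
        # M-step: renormalize, playing the role of the Lagrange multipliers.
        t = defaultdict(float, {(f, e): c / total[e] for (f, e), c in count.items()})
    return t

bitext = [(["la", "maison"], ["the", "house"]),
          (["la", "fleur"], ["the", "flower"])]
t = train_model1(bitext)
print(round(t[("maison", "house")], 2))   # rises toward 1.0 over the iterations
```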
In Model 1, we take no cognizance of where words appear in either string. The first word in the French string is just as likely to be connected to a word at the end of the English string as to one at the beginning. In Model 2 we make the same assumptions as in Model 1, except that we assume that Pr(a_j | a_1^{j-1}, f_1^{j-1}, m, e) depends on j, a_j, and m, as well as on l. We introduce a set of alignment probabilities,

a(i | j, m, l) ≡ Pr(a_j = i | a_1^{j-1}, f_1^{j-1}, m, e),

subject to the constraint that, for each triple (j, m, l), Σ_{i=0}^{l} a(i | j, m, l) = 1. As before, we seek an unconstrained extremum of the corresponding auxiliary function. The reader will easily verify that equations (11), (13), and (14) carry over from Model 1 to Model 2 unchanged. We need a new count, c(i | j, m, l; f, e), the expected number of times that the word in position j of f is connected to the word in position i of e. Clearly,

c(i | j, m, l; f, e) = Σ_a Pr(a | e, f) δ(i, a_j).   (23)

Notice that if f^(s) does not have length m or if e^(s) does not have length l, then the corresponding count is zero. As with the λs in earlier equations, the μs here serve simply to remind us that the alignment probabilities must be normalized.

Model 2 shares with Model 1 the important property that the sums in equations (12) and (23) can be obtained efficiently. We can rewrite equation (21) so that it, and the corresponding counts, can be evaluated without enumerating alignments. Equation (27) has a double sum rather than the product of two single sums, as in equation (17), because in equation (27) i and j are tied together through the alignment probabilities. Model 1 is the special case of Model 2 in which a(i | j, m, l) is held fixed at (l + 1)^{-1}. Therefore, any set of parameters for Model 1 can be reinterpreted as a set of parameters for Model 2. Taking as our initial estimates of the parameters for Model 2 the parameter values that result from training Model 1 is equivalent to computing the probabilities of all alignments as if we were dealing with Model 1, but then collecting the counts as if we were dealing with Model 2. The idea of computing the probabilities of the alignments using one model, but collecting the counts in a way appropriate to a second model, is very general and can always be used to transfer a set of parameters from one model to another.
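Both the transfer just described and the Model 2 counts themselves are cheap to compute because each a_j is chosen independently. The sketch below (same assumed parameter-table layout as the Model 1 example, with English position 0 for the empty cept) computes the per-position posteriors Pr(a_j = i | f, e) from which the counts in equations (12) and (23) follow.

```python
def model2_posteriors(f_sent, e_sent, t, a):
    """Pr(a_j = i | f, e) for every French position j and English position i.
    `t[(f, e)]` and `a[(i, j, m, l)]` are parameter tables in our own layout;
    e_sent excludes the NULL word, which is added here as position 0."""
    e_full = ["NULL"] + e_sent
    m, l = len(f_sent), len(e_sent)
    posteriors = []
    for j, f in enumerate(f_sent, start=1):
        scores = [t[(f, e_full[i])] * a[(i, j, m, l)] for i in range(l + 1)]
        z = sum(scores)
        posteriors.append([s / z for s in scores])
    return posteriors
```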
We created Models 1 and 2 by making various assumptions about the conditional probabilities that appear in equation (4). As we have mentioned, equation (4) is an exact statement, but it is only one of many ways in which the joint likelihood of f and a can be written as a product of conditional probabilities. Each such product corresponds in a natural way to a generative process for developing f and a from e. In the process corresponding to equation (4), we first choose a length for f. Next, we decide which position in e is connected to f_1 and what the identity of f_1 is. Then, we decide which position in e is connected to f_2, and so on.

For Models 3, 4, and 5, we write the joint likelihood as a product of conditional probabilities in a different way. Casual inspection of some translations quickly establishes that the is usually translated into a single word (le, la, or l'), but is sometimes omitted; or that only is often translated into one word (for example, seulement), but sometimes into two (for example, ne ... que), and sometimes into none. The number of French words to which e is connected in a randomly selected alignment is a random variable, Φ_e, that we call the fertility of e. Each choice of the parameters in Model 1 or Model 2 determines a distribution, Pr(Φ_e = φ), for this random variable. But the relationship is remote: just what change will be wrought in the distribution of Φ_the if, say, we adjust a(1 | 2, 8, 9) is not immediately clear. In Models 3, 4, and 5, we parameterize fertilities directly.

As a prolegomenon to a detailed discussion of Models 3, 4, and 5, we describe the generative process upon which they are based. Given an English string, e, we first decide the fertility of each word and a list of French words to connect to it. We call this list, which may be empty, a tablet. The collection of tablets is a random variable, T, that we call the tableau of e; the tablet for the ith English word is a random variable, T_i; and the kth French word in the ith tablet is a random variable, T_{ik}. After choosing the tableau, we permute its words to produce f. This permutation is a random variable, Π. The position in f of the kth word in the ith tablet is yet another random variable, Π_{ik}.

The joint likelihood for a tableau, τ, and a permutation, π, is given in equation (29). In this equation, τ_i^{k-1} represents the series of values τ_{i1}, ..., τ_{i,k-1}; π_i^{k-1} represents the series of values π_{i1}, ..., π_{i,k-1}; and φ_i is shorthand for the value taken by the fertility of the ith English word. Knowing τ and π determines a French string and an alignment, but in general several different pairs τ, π may lead to the same pair f, a. We denote the set of such pairs by ⟨f, a⟩. Clearly, then,

Pr(f, a | e) = Σ_{(τ, π) ∈ ⟨f, a⟩} Pr(τ, π | e).   (30)

The number of elements in ⟨f, a⟩ is ∏_{i=0}^{l} φ_i!, because for each τ_i there are φ_i! arrangements that lead to the pair f, a. Figure 4 shows the two tableaux for (bon marché | cheap(1,2)).

Except for degenerate cases, there is one alignment in A(e, f) for which Pr(a | e, f) is greatest. We call this the Viterbi alignment for (f | e) and denote it by V(f | e). We know of no practical algorithm for finding V(f | e) for a general model. Indeed, if someone were to claim that he had found V(f | e), we know of no practical algorithm for demonstrating that he is correct. But for Model 2 (and, thus, also for Model 1), finding V(f | e) is straightforward. For each j, we simply choose a_j so as to make the product t(f_j | e_{a_j}) a(a_j | j, m, l) as large as possible. The Viterbi alignment depends on the model with respect to which it is computed. When we need to distinguish between the Viterbi alignments for different models, we write V(f | e; 1), V(f | e; 2), and so on.

We denote by A_{i↔j}(e, f) the set of alignments for which a_j = i. We say that i↔j is pegged in these alignments. By the pegged Viterbi alignment for i↔j, which we write V_{i↔j}(f | e), we mean that element of A_{i↔j}(e, f) for which Pr(a | e, f) is greatest. Obviously, we can find V_{i↔j}(f | e; 1) and V_{i↔j}(f | e; 2) quickly with a straightforward modification of the algorithm described above for finding V(f | e; 1) and V(f | e; 2).
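Under Model 2, both V(f | e; 2) and its pegged variants reduce to independent argmax choices, as just described; a sketch with the same hypothetical parameter tables as above:

```python
def viterbi_model2(f_sent, e_sent, t, a, peg=None):
    """V(f | e; 2), or the pegged variant V_{i<->j}(f | e; 2) when peg=(i, j)
    fixes French position j to English position i (0 = empty cept)."""
    e_full = ["NULL"] + e_sent
    m, l = len(f_sent), len(e_sent)
    alignment = []
    for j, f in enumerate(f_sent, start=1):
        if peg is not None and peg[1] == j:
            alignment.append(peg[0])
            continue
        best_i = max(range(l + 1),
                     key=lambda i: t[(f, e_full[i])] * a[(i, j, m, l)])
        alignment.append(best_i)
    return alignment
```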
Model 3 is based on equation (29). Earlier, we were unable to treat each of the conditional probabilities on the right-hand side of equation (4) as a separate parameter. With equation (29) we are no better off and must again make assumptions to reduce the number of independent parameters. There are many different sets of assumptions that we might make, each leading to a different model for the translation process. In Model 3, we assume that, for i between 1 and l, Pr(φ_i | φ_1^{i-1}, e) depends only on φ_i and e_i; that, for all i, Pr(τ_{ik} | τ_i^{k-1}, τ_1^{i-1}, φ_0^{l}, e) depends only on τ_{ik} and e_i; and that, for i between 1 and l, Pr(π_{ik} | π_i^{k-1}, π_1^{i-1}, τ_0^{l}, φ_0^{l}, e) depends only on π_{ik}, i, m, and l. The parameters of Model 3 are thus a set of fertility probabilities, n(φ | e_i) ≡ Pr(φ_i = φ | φ_1^{i-1}, e); a set of translation probabilities, t(f | e_i) ≡ Pr(T_{ik} = f | τ_i^{k-1}, τ_1^{i-1}, φ_0^{l}, e); and a set of distortion probabilities, d(j | i, m, l) ≡ Pr(Π_{ik} = j | π_i^{k-1}, π_1^{i-1}, τ_0^{l}, φ_0^{l}, e).

We treat the distortion and fertility probabilities for e_0 differently. The empty cept conventionally occupies position 0, but actually has no position. Its purpose is to account for those words in the French string that cannot readily be accounted for by other cepts in the English string. Because we expect these words to be spread uniformly throughout the French string, and because they are placed only after all of the other words in the string have been placed, we assume that Pr(Π_{0,k+1} = j | π_0^{k}, π_1^{l}, τ_0^{l}, φ_0^{l}, e) equals 0 unless position j is vacant, in which case it equals (φ_0 − k)^{-1}. Therefore, the contribution of the distortion probabilities for all of the words in τ_0 is 1/φ_0!.

We expect φ_0 to depend on the length of the French string because longer strings should have more extraneous words. Therefore, we assume that

Pr(φ_0 | φ_1^{l}, e) = C(φ_1 + ··· + φ_l, φ_0) p_0^{φ_1 + ··· + φ_l − φ_0} p_1^{φ_0}   (31)

for some pair of auxiliary parameters p_0 and p_1, where C(n, k) denotes the binomial coefficient. The expression on the left-hand side of this equation depends on φ_1^{l} only through the sum φ_1 + ··· + φ_l and defines a probability distribution over φ_0 whenever p_0 and p_1 are nonnegative and sum to 1. We can interpret Pr(φ_0 | φ_1^{l}, e) as follows. We imagine that each of the words from τ_1^{l} requires an extraneous word with probability p_1 and that this extraneous word must be connected to the empty cept. The probability that exactly φ_0 of the words from τ_1^{l} will require an extraneous word is just the expression given in equation (31).

As with Models 1 and 2, an alignment of (f | e) is determined by specifying a_j for each position in the French string. The fertilities, φ_0 through φ_l, are functions of the a_j's: φ_i is equal to the number of j's for which a_j equals i. Therefore,

Pr(f, a | e) = C(m − φ_0, φ_0) p_0^{m − 2φ_0} p_1^{φ_0} ∏_{i=1}^{l} φ_i! n(φ_i | e_i) ∏_{j=1}^{m} t(f_j | e_{a_j}) ∏_{j: a_j ≠ 0} d(j | a_j, m, l),   (32)

with Σ_f t(f | e) = 1, Σ_j d(j | i, m, l) = 1, Σ_φ n(φ | e) = 1, and p_0 + p_1 = 1. The assumptions that we make for Model 3 are such that each of the pairs (τ, π) in ⟨f, a⟩ makes an identical contribution to the sum in equation (30). The factorials in equation (32) come from carrying out this sum explicitly. There is no factorial for the empty cept because it is exactly canceled by the contribution from the distortion probabilities.

By now, the reader will be able to provide his or her own auxiliary function for seeking a constrained maximum of the likelihood of a translation with Model 3, but for completeness and to establish notation, we write

h(t, d, n, p, λ, μ, ν, ξ) ≡ Pr(f | e) − Σ_e λ_e (Σ_f t(f | e) − 1) − Σ_{iml} μ_{iml} (Σ_j d(j | i, m, l) − 1) − Σ_e ν_e (Σ_φ n(φ | e) − 1) − ξ(p_0 + p_1 − 1).   (33)

Following the trail blazed with Models 1 and 2, we define the counts in equations (34) through (38); the counts in the last two of these equations correspond to the parameters p_0 and p_1 that determine the fertility of the empty cept in the English string. The reestimation formulae for Model 3 are given in equations (39) through (42). Equations (34) and (39) are identical to equations (12) and (14) and are repeated here only for convenience. Equations (35) and (40) are similar to equations (23) and (25), but a(i | j, m, l) differs from d(j | i, m, l) in that the former sums to unity over all i for fixed j while the latter sums to unity over all j for fixed i. Equations (36), (37), (38), (41), and (42), for the fertility parameters, are new.
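For concreteness, the likelihood of equation (32) can be evaluated directly for a given alignment; the sketch below uses our own parameter-table layout and is only as reliable as the reconstruction of (32) above.

```python
from math import comb, factorial

def model3_likelihood(f_sent, e_sent, alignment, t, d, n, p0, p1):
    """Pr(f, a | e) under Model 3, following equation (32) as reconstructed
    above (our parameter-table layout; alignment[j-1] = i connects French
    position j to English position i, with 0 for the empty cept)."""
    m, l = len(f_sent), len(e_sent)
    phi = [sum(1 for i in alignment if i == target) for target in range(l + 1)]
    prob = comb(m - phi[0], phi[0]) * p0 ** (m - 2 * phi[0]) * p1 ** phi[0]
    for i in range(1, l + 1):
        prob *= factorial(phi[i]) * n[(phi[i], e_sent[i - 1])]
    for j, fj in enumerate(f_sent, start=1):
        i = alignment[j - 1]
        prob *= t[(fj, "NULL" if i == 0 else e_sent[i - 1])]
        if i > 0:               # empty-cept words carry no distortion factor
            prob *= d[(j, i, m, l)]
    return prob
```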
because of the fertility parameters , we can not exchange the sums over al through am with the product over j in equation ( 32 ) as we were able to for equations ( 6 ) and ( 21 ) . we are not , however , entirely bereft of hope . the alignment is a useful device precisely because some alignments are much more probable than others . our strategy is to carry out the sums in equations ( 32 ) and ( 34 ) - ( 38 ) only over some of the more probable alignments , ignoring the vast sea of much less probable ones . specifically , we begin with the most probable alignment that we can find and then include all alignments that can be obtained from it by small changes . to define unambiguously the subset , s , of the elements of a ( fle ) over which we evaluate the sums , we need yet more terminology . we say that two alignments , a and a ' , differ by a move if there is exactly one value of j for which ai 0 . we say that they differ by a swap if aj = ai ' except at two values , ii and j2 , for which ( 111 = al , ' and = a111 . we say that two alignments are neighbors if they are identical or differ by a move or by a swap . we denote the set of all neighbors of a by .ai ( a ) . let b ( a ) be that neighbor of a for which the likelihood pr ( b ( a ) lf , e ) is greatest . suppose that ij is pegged for a . among the neighbors of a for which ij is also pegged , let b , +_1 ( a ) be that for which the likelihood is greatest . the sequence of alignments a , b ( a ) , b2 ( a ) b ( b ( a ) ) , . . . , converges in a finite number of steps to an alignment that we write as bcΒ° ( a ) . similarly , if ij is pegged for a , the sequence of alignments a , notice that op is the fertility of the word in position i ' for alignment a . the fertility of this word in alignment a ' is 0 , , + 1 . similar equations can be easily derived when either i or i ' is zero , or when a and a ' differ by a swap . we leave the details to the reader . with these preliminaries , we define s by s = ar ( r ) ( v ( fl e ; 2 ) ) ) u v ( fle ; 2 ) ) ) . ( 44 ) in this equation , we use b°° ( v ( f le ; 2 ) ) and lfΒ° j ( v , -1 ( f le ; 2 ) ) as handy approximations to v ( f le ; 3 ) and viβ€”j ( f le ; 3 ) , neither of which we are able to compute efficiently . in one iteration of the em algorithm for model 3 , we compute the counts in equations ( 34 ) β€” ( 38 ) , summing only over elements of s , and then use these counts in equations ( 39 ) β€” ( 42 ) to obtain a new set of parameters . if the error made by including only some of the elements of β€ž 4 ( e , f ) is not too great , this iteration will lead to values of the parameters for which the likelihood of the training data is at least as large as for the first set of parameters . we make no initial guess of the parameters for model 3 , but instead adapt the parameters from the final iteration of the em algorithm for model 2 . that is , we compute the counts in equations ( 34 ) β€” ( 38 ) using model 2 to evaluate pr ( ai e , f ) . the simple form of model 2 again makes exact calculation feasible . we can readily adapt equations ( 27 ) and ( 28 ) to compute counts for the translation and distortion probabilities ; efficient calculation of the fertility counts is more involved , and we defer a discussion of it to appendix b . 
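The move/swap neighborhood and the hill-climbing operator b-infinity can be sketched as follows; score is any callable returning the alignment probability under the current model (for example, the Model 3 likelihood sketched above). This naive version rescores each neighbor from scratch, whereas the paper evaluates neighbors incrementally via equation (43).

    def neighbors(alignment, l):
        """All alignments reachable from `alignment` by one move or one swap."""
        m = len(alignment)
        for j in range(m):                      # moves: change a_j to any other i
            for i in range(l + 1):
                if i != alignment[j]:
                    a2 = list(alignment); a2[j] = i
                    yield a2
        for j1 in range(m):                     # swaps: exchange a_j1 and a_j2
            for j2 in range(j1 + 1, m):
                if alignment[j1] != alignment[j2]:
                    a2 = list(alignment)
                    a2[j1], a2[j2] = a2[j2], a2[j1]
                    yield a2

    def hillclimb(alignment, l, score):
        """b-infinity(a): move to the best neighbor until no neighbor improves."""
        current, current_score = list(alignment), score(alignment)
        while True:
            best, best_score = current, current_score
            for a2 in neighbors(current, l):
                s = score(a2)
                if s > best_score:
                    best, best_score = a2, s
            if best is current:
                return current
            current, current_score = best, best_score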
the reader will have noticed a problem with our parameterization of the distortion probabilities in model 3 : whereas we can see by inspection that the sum over all pairs y , 7 of the expression on the right-hand side of equation ( 29 ) is unity , it is equally clear that this can no longer be the case if we assume that pr ( llik = e ) depends only on ] , i , m , and 1 for i > 0 . because the distortion probabilities for assigning positions to later words do not depend on the positions assigned to earlier words , model 3 wastes some of its probability on what we might call generalized strings , i.e. , strings that have some positions with several words and others with none . when a model has this property of not concentrating all of its probability on events of interest , we say that it is deficient . deficiency is the price that we pay for the simplicity that allows us to write equation ( 43 ) . deficiency poses no serious problem here . although models 1 and 2 are not technically deficient , they are surely spiritually deficient . each assigns the same probability to the alignments ( je n'ai pas de stylo 1 i ( 1 ) do not ( 2,4 ) have ( 3 ) a ( 5 ) pen ( 6 ) ) and ( je pas ai ne de stylo 1 i ( 1 ) do not ( 2,4 ) have ( 3 ) a ( 5 ) pen ( 6 ) ) , and , therefore , essentially the same probability to the translations ( je n'ai pas de stylo 1 i do not have a pen ) and ( je pas ai ne de stylo 11 do not have a pen ) . in each case , not produces two words , ne and pas , and in each case , one of these words ends up in the second position of the french string and the other in the fourth position . the first translation should be much more probable than the second , but this defect is of little concern because while we might have to translate the first string someday , we will never have to translate the second . we do not use our translation models to predict french given english but rather as a component of a system designed to predict english given french . they need only be accurate to within a constant factor over well-formed strings of french words . often the words in an english string constitute phrases that are translated as units into french . sometimes , a translated phrase may appear at a spot in the french string different from that at which the corresponding english phrase appears in the english string . the distortion probabilities of model 3 do not account well for this tendency of phrases to move around as units . movement of a long phrase will be much less likely than movement of a short phrase because each word must be moved independently . in model 4 , we modify our treatment of pr ( ii , k = 74-1 , 0 ( 1 ) e ) so as to alleviate this problem . words that are connected to the empty cept do not usually form phrases , and so we continue to assume that these words are spread uniformly throughout the french string . as we have described , an alignment resolves an english string into a ceptual scheme consisting of a set of possibly overlapping cepts . each of these cepts then accounts for one or more french words . in model 3 the ceptual scheme for an alignment is determined by the fertilities of the words : a word is a cept if its fertility is greater than zero . the empty cept is a part of the ceptual scheme if 00 is greater than zero . as before we exclude multi-word cepts . among the one-word cepts , there is a natural order corresponding to the order in which they appear in the english string . let [ ii denote the position in the english string of the ith one-word cept . 
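The one-word cepts and their English positions [i] are determined by the alignment alone, as are the tablets (the French positions attached to each cept). A short sketch, for use with the Model 4 quantities defined next:

    def one_word_cepts(alignment, l):
        """Return the English positions with non-zero fertility, in order
        (entry i-1 is [i]), together with the tablet of French positions
        attached to each English position. alignment[j-1] = i, with i = 0
        the empty cept."""
        tablets = {i: [] for i in range(l + 1)}
        for j, i in enumerate(alignment, start=1):
            tablets[i].append(j)
        cept_positions = [i for i in range(1, l + 1) if tablets[i]]
        return cept_positions, tablets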
we define the center of this cept , 0 β€ž to be the ceiling of the average value of the positions in the french string of the words from its tablet . we define its head to be that word in its tablet for which the position in the french string is smallest . in model 4 , we replace rn , 1 ) by two sets of parameters : one for placing the head of each cept , and one for placing any remaining words . for [ i ] > 0 , we require that the head for cept i be r [ ] i and we assume that pr ( ii [ ] i = /1741-1,71 , of ) , e ) = - oi-i ia ( e [ i-i ] ) , 8 ( 4 ) ) . ( 45 ) here , a and 8 are functions of the english and french words that take on a small number of different values as their arguments range over their respective vocabularies . brown et al . ( 1990 ) describe an algorithm for dividing a vocabulary into classes so as to preserve mutual information between adjacent classes in running text . we construct a and b as functions with 50 distinct values by dividing the english and french vocabularies each into 50 classes according to this algorithm . by assuming that the probability depends on the previous cept and on the identity of the french word being placed , we can account for such facts as the appearance of adjectives before nouns in english but after them in french . we call j - 01-1 the displacement for the head of cept i . it may be either positive or negative . we expect di ( -1 ia ( e ) , bm ) to be larger than d1 ( + 11a ( e ) ,8 ( f ) ) when e is an adjective and f is a noun . indeed , this is borne out in the trained distortion probabilities for model 4 , where we find that di ( -1 ia ( government ' s ) , ( developpement ) ) is 0.7986 , while d1 ( -i- 1 ia ( government ' s ) , b ( developpement ) ) is 0.0168 . suppose , now , that we wish to place the kth word of cept i for [ i ] > 0 , k > 1 . we assume that we require that 'irk '' , be greater than r [ ] k_i . some english words tend to produce a series of french words that belong together , while others tend to produce a series of words that should be separate . for example , implemented can produce mis en application , which usually occurs as a unit , but not can produce ne pas , which often occurs with an intervening verb . we expect d > 1 ( 2ib ( pas ) ) to be relatively large compared with d > i ( 218 ( en ) ) . after training , we find that d > i ( 2i13 ( pas ) ) is 0.6847 and d > 1 ( 215 ( en ) ) is 0.1533 . whereas we assume that 7-ni can be placed either before or after any previously positioned words , we require subsequent words from tn to be placed in order . this does not mean that they must occupy consecutive positions but only that the second word from tn must lie to the right of the first , the third to the right of the second , and so on . because of this , only one of the om ! arrangements of r [ t ] is possible . we leave the routine details of deriving the count and reestimation formulae for model 4 to the reader . he may find the general formulae in appendix b helpful . once again , the several counts for a translation are expectations of various quantities over the possible alignments with the probability of each alignment computed from an earlier estimate of the parameters . as with model 3 , we know of no trick for evaluating these expectations and must rely on sampling some small set , s , of alignments . as described above , the simple form that we assume for the distortion probabilities in model 3 makes it possible for us to find pc ' ( a ) rapidly for any a . 
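The center of a cept and the Model 4 head-placement probability d_1(j - c_prev | A(e_prev), B(f_j)) can be computed as below. The class maps eng_class and fr_class (each with 50 values, as in the text) and the table d1 are assumed containers, not an interface from the paper.

    from math import ceil

    def cept_center(positions):
        """Ceiling of the average French position of a cept's words."""
        return ceil(sum(positions) / len(positions))

    def head_distortion_score(j, prev_positions, e_prev, f_word, d1, eng_class, fr_class):
        """d_1(j - c_prev | A(e_prev), B(f_j)) for placing the head of a cept at
        French position j, given the positions occupied by the previous cept."""
        displacement = j - cept_center(prev_positions)
        return d1.get((displacement, eng_class[e_prev], fr_class[f_word]), 0.0)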
the analog of equation ( 43 ) for model 4 is complicated by the fact that when we move a french word from cept to cept we change the centers of two cepts and may affect the contribution of several words . it is nonetheless possible to evaluate the adjusted likelihood incrementally , although it is substantially more time-consuming . faced with this unpleasant situation , we proceed as follows . let the neighbors of a be ranked so that the first is the neighbor for which pr ( ale , f ; 3 ) is greatest , the second the one for which pr ( ale , f ; 3 ) is next greatest , and so on . we define b ( a ) to be the highest-ranking neighbor of a for which pr ( b- ( a ) le , f ; 4 ) is at least as large as pr ( ale , f ; 4 ) . we define -6 β€ž _1 ( a ) analogously . here , pr ( ale , f ; 3 ) means pr ( ale , f ) as computed with model 3 , and pr ( ale , f ; 4 ) means pr ( ale , f ) as computed with model 4 . we define s for model 4 by n- ( b & quot ; ( v ( f le ; 2 ) ) ) u ( fie ; 2 ) ) ) . ( 47 ) this equation is identical to equation ( 47 ) except that b has been replaced by b . models 3 and 4 are both deficient . in model 4 , not only can several words lie on top of one another , but words can be placed before the first position or beyond the last position in the french string . we remove this deficiency in model 5 . after we have placed the words for 41-1 and r [ i ] ik-1 there will remain some vacant positions in the french string . obviously , t [ i ] k should be placed in one of these vacancies . models 3 and 4 are deficient precisely because we fail to enforce this constraint for the one-word cepts . let v ( j , 41-1 , t [ ] l & quot ; ) be the number of vacancies up to and including position j just before we place ttipc . in the interest of notational brevity , a noble but elusive goal , we write this simply as v1 . we retain two sets of distortion parameters , as in model 4 , and continue to refer to them as d1 and d > 1 . we assume that , for [ ii > 0 , = 7- ( ! , , 0 ( 1 ) , e ) = ( vilb ( .6 ) , voi_i , vin - + 1 ) ( 1 - 6 ( v1 , vj-i ) ) β€’ ( 48 ) the number of vacancies up to j is the same as the number of vacancies up to j - 1 only when j is not itself vacant . the last factor , therefore , is 1 when j is vacant and 0 otherwise . in the final parameter of d1 , um is the number of vacancies remaining in the french string . if on = 1 , then rto may be placed in any of these vacancies ; if oki = 2 , 7-ni may be placed in any but the last of these vacancies ; in general , r [ i ] i may be placed in any but the rightmost on - 1 of the remaining vacancies . because rto must occupy the leftmost place of any of the words from tn , we must take care to leave room at the end of the string for the remaining words from this tablet . as with model 4 , we allow d1 to depend on the center of the previous cept and on fj , but we suppress the dependence on eu_ii since we should otherwise have too many parameters . for [ ii > 0 and k > 1 , we assume again , the final factor enforces the constraint that ttipc land in a vacant position , and , again , we assume that the probability depends on 4 only through its class . model 5 is described in more detail in appendix b . as with model 4 , we leave the details of the count and reestimation formulae to the reader . no incremental evaluation of the likelihood of neighbors is possible with model 5 because a move or swap may require wholesale recomputation of the likelihood of an alignment . 
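The vacancy counts on which Model 5 conditions are a simple prefix sum over unoccupied French positions; a minimal sketch:

    def vacancies(occupied, m):
        """v[j] = number of vacant positions among 1..j (1-indexed; v[0] = 0).
        `occupied` is the set of French positions already filled."""
        v = [0] * (m + 1)
        for j in range(1, m + 1):
            v[j] = v[j - 1] + (0 if j in occupied else 1)
        return v

Roughly, the head of cept i placed at position j is then scored as d_1(v_j | B(f_j), v_{c_prev}, v_m - phi_i + 1), with the final factor in equation (48) requiring that position j itself be vacant.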
therefore , when we evaluate expectations for model 5 , we include only the alignments in s as defined in equation ( 47 ) . we further trim these alignments by removing any alignment a , for which pr ( a le , f ; 4 ) is too much smaller than pr ( b . ' ( v ( f i e ; 2 ) le , f ; 4 ) . model 5 is a powerful but unwieldy ally in the battle to align translations . it must be led to the battlefield by its weaker but more agile brethren models 2 , 3 , and 4 . in fact , this is the raison d'Γͺtre of these models . to keep them aware of the lay of the land , we adjust their parameters as we carry out iterations of the em algorithm for model 5 . that is , we collect counts for models 2 , 3 , and 4 by summing over alignments as determined by the abbreviated s described above , using model 5 to compute pr ( a i e , f ) . although this appears to increase the storage necessary for maintaining counts as we proceed through the training data , the extra burden is small because the overwhelming majority of the storage is devoted to counts for t ( f i e ) , and these are the same for models 2 , 3 , 4 , and 5 . we have used a large collection of training data to estimate the parameters of the models described above . brown , lai , and mercer ( 1991 ) have described an algorithm with which one can reliably extract french and english sentences that are translations of one another from parallel corpora . they used the algorithm to extract a large number of translations from several years of the proceedings of the canadian parliament . from these translations , we have chosen as our training data those for which both the english sentence and the french sentence are 30 or fewer words in length . this is a collection of 1,778,620 translations . in an effort to eliminate some of the typographical errors that abound in the text , we have chosen as our english vocabulary all of those words that appear at least twice in english sentences in our data , and as our french vocabulary all of those words that appear at least twice in french sentences in our data . all other words we replace with a special unknown english word or unknown french word accordingly as they appear in an english sentence or a french sentence . we arrive in this way at an english vocabulary of 42,005 words and a french vocabulary of 58,016 words . some typographical errors are quite frequent , for example , momento for memento , and so our vocabularies are not completely free of them . at the same time , some words are truly rare , and so we have , in some cases , snubbed legitimate words . adding eo to the english vocabulary brings it to 42,006 words . we have carried out 12 iterations of the em algorithm for this data . we initialized the process by setting each of the 2 , 437 , 020 , 096 translation probabilities , t ( f i e ) , to 1/58,016 . that is , we assume each of the 58,016 words in the french vocabulary to be equally likely as a translation for each of the 42,006 words in the english vocabulary . for t ( f le ) to be greater than zero at the maximum likelihood solution for one of our models , f and e must occur together in at least one of the translations in our training data . this is the case for only 25 , 427 , 016 pairs , or about one percent of all translation probabilities . on the average , then , each english word appears with about 605 french words . table 1 summarizes our training computation . 
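The vocabulary construction described here (keep words occurring at least twice, map everything else to a special unknown word) is easy to reproduce. A sketch; the paper uses separate unknown tokens for English and French, while this version takes the token as a parameter, and the name <unk> is chosen arbitrarily.

    from collections import Counter

    def build_vocab(sentences, min_count=2, unk="<unk>"):
        """Keep words appearing at least `min_count` times; map the rest to `unk`."""
        counts = Counter(w for s in sentences for w in s)
        vocab = {w for w, c in counts.items() if c >= min_count}
        mapped = [[w if w in vocab else unk for w in s] for s in sentences]
        return vocab, mapped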
at each iteration , we compute the probabilities of the various alignments of each translation using one model , and collect counts using a second ( possibly different ) model . these are referred to in the table as the in model and the out model , respectively . after each iteration , we retain individual values only for those translation probabilities that surpass a threshold ; the remainder we set to a small value ( 10-12 ) . this value is so small that it does not affect the normalization conditions , but is large enough that translation probabilities can be resurrected during later iterations . we see in columns 4 and 5 that even though we lower the threshold as iterations progress , fewer and fewer probabilities survive . by the final iteration , only 1 , 658 , 364 probabilities survive , an average of about 39 french words for each english word . although the entire t array has 2 , 437 , 020 , 096 entries , and we need to store it twice , once as probabilities and once as counts , it is clear from the preceeding remarks that we need never deal with more than about 25 million counts or about 12 million probabilities . we store these two arrays using standard sparse matrix techniques . we keep counts as pairs of bytes , but allow for overflow into 4 bytes if necessary . in this way , it is possible to run the training program in less than 100 megabytes of memory . while this number would have seemed extravagant a few years ago , today it is available at modest cost in a personal workstation . as we have described , when the in model is neither model 1 nor model 2 , we evaluate the count sums over only some of the possible alignments . many of these alignments have a probability much smaller than that of the viterbi alignment . the column headed alignments in table 1 shows the average number of alignments for which the probability is within a factor of 25 of the probability of the viterbi alignment in each iteration . as this number drops , the model concentrates more and more probability onto fewer and fewer alignments so that the viterbi alignment becomes ever more dominant . the last column in the table shows the perplexity of the french text given the english text for the in model of the iteration . we expect the likelihood of the training data to increase with each iteration . we can think of this likelihood as arising from a product of factors , one for each french word in the training data . we have 28,850 , 104 french words in our training data , so the 28,850 , 104th root of the likelihood is the average factor by which the likelihood is reduced for each additional french word . the reciprocal of this root is the perplexity shown in the table . as the likelihood increases , the perplexity decreases . we see a steady decrease in perplexity as the iterations progress except when we switch from model 2 as the in model to model 3 . this sudden jump is not because model 3 is a poorer model than model 2 , but because model 3 is deficient : the great majority of its probability is squandered on objects that are not strings of french words . as we have argued , deficiency is not a problem . in our description of model 1 , we left pr ( m i e ) unspecified . in quoting perplexities for models 1 and 2 , we have assumed that the length of the french string is poisson with a mean that is a linear function of the length of the english string . specifically , we have assumed that pr ( m = m le ) = ( a oine-al i m ! , with a equal to 1.09 . 
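The perplexity reported in Table 1 is the reciprocal of the N-th root of the training likelihood, where N is the total number of French tokens (28,850,104 here). Computed in log space to avoid underflow, this is simply:

    from math import exp

    def perplexity(log_likelihood, num_french_words):
        """exp(-log L / N): the reciprocal of the N-th root of the likelihood,
        N being the total number of French tokens in the training data."""
        return exp(-log_likelihood / num_french_words)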
it is interesting to see how the viterbi alignments change as the iterations progress . in figure 5 , we show for several sentences the viterbi alignment after iterations 1 , 6 , 7 , and 12 . iteration 1 is the first iteration for model 2 , and iterations 6 , 7 , and 12 are the final iterations for models 2 , 3 , and 5 , respectively . in each example , we show the french sentence with a subscript affixed to each word to ease the reader 's task in interpreting the list of numbers after each english word . in the first example , al me semble faire signe que oui i it seems to me that he is nodding ) , two interesting changes evolve over the course of the iterations . in the alignment for model 1 , ii is correctly connected to he , but in all later alignments il is incorrectly connected to it . models 2 , 3 , and 5 discount a connection of he to il because it is quite far away . we do not yet have a model with sufficient linguistic sophistication to make this connection properly . on the other hand , we see that nodding , which in models 1 , 2 , and 3 is connected only to signe and oui , is correctly connected to the entire phrase faire signe que oui in model 5 . in the second example , ( voyez les profits que ils ont realises i look at the profits they have made ) , models 1 , 2 , and 3 incorrectly connect profits4 to both profits3 and realises7 , but with model 5 , profits4 is correctly connected only to profits3 , and made7 is connected to rea/ises7 . finally , in ( de les promesses , de les promesses ! i promises , promises . ) , promisesi is connected to both instances of promesses with model 1 ; promises3 is connected to most of the french sentence with model 2 ; the final punctuation of the english sentence is connected to both the exclamation point and , curiously , to de5 with model 3 ; and only with model 5 do we have a satisfying alignment of the two sentences . the orthography for the french sentence in the second example is voyez les profits qu'ils ont realises and in the third example is des promesses , des promesses ! we have restored the e to the end figure 5 the progress of alignments with iteration . of qu ' and have twice analyzed des into its constituents , de and les . we commit these and other petty pseudographic improprieties in the interest of regularizing the french text . in all cases , orthographic french can be recovered by rule from our corrupted versions . figures 6-15 show the translation probabilities and fertilities after the final iteration of training for a number of english words . we show all and only those probabilities that are greater than 0.01 . some words , like nodding , in figure 6 , do not slip gracefully into french . thus , we have translations like ( ii fait signe que oui i he is nodding ) , ( 11 fait un signe de la tete i he is nodding ) , ( ii fait un signe de tete affirmatif i he is nodding ) , or ( ii hoche la tete affirmativement i he is nodding ) . as a result , nodding frequently has a large fertility and spreads its translation probability over a variety of words . in french , what is worth saying is worth saying in many different ways . we see another facet of this with words like should , in figure 7 , which rarely has a fertility greater than one but still produces many different words , among them devrait , devraient , devrions , doit , doivent , devons , and devrais . these are ( just a fraction of the many ) forms of the french verb devoir . 
adjectives fare a little better : national , in figure 8 , almost never produces more than one word and confines itself to one of nationale , national , nationaux , and nationales , respectively the feminine , the masculine , the masculine plural , and the feminine plural of the corresponding french adjective . it is clear that our models would benefit from some kind of morphological processing to rein in the lexical exuberance of french . we see from the data for the , in figure 9 , that it produces le , la , les , and l ' as we would expect . its fertility is usually 1 , but in some situations english prefers an article where french does not and so about 14 % of the time its fertility is 0 . sometimes , as with farmers , in figure 10 , it is french that prefers the article . when this happens , the english noun trains to produce its translation together with an article . thus , farmers translation and fertility probabilities for nodding . typically has a fertility 2 and usually produces either agriculteurs or les . we include additional examples in figures 11 through 15 , which show the translation and fertility probabilities for external , answer , oil , former , and not . although we show the various probabilities to three decimal places , one must realize that the specific numbers that appear are peculiar to the training data that we used in obtaining them . they are not constants of nature relating the platonic ideals of eternal english and eternal french . had we used different sentences as training data , we might well have arrived at different numbers . for example , in figure 9 , we see that t ( lelthe ) = 0.497 while the corresponding number from figure 4 of brown et al . ( 1990 ) is 0.610 . the difference arises not from some instability in the training algorithms or some subtle shift in the languages in recent years , but from the fact that we have used 1,778,620 pairs of sentences covering virtually the complete vocabulary of the hansard data for training , while they used only 40,000 pairs of sentences and restricted their attention to the 9,000 most common words in each of the two vocabularies . figures 16 , 17 , and 18 show automatically derived alignments for three translations . in the terminology of section 4.6 , each alignment is b & quot ; ( v ( fle ; 2 ) ) . we stress that these alignments have been found by an algorithm that involves no explicit knowledge of either french or english . every fact adduced to support them has been discovered algorithmically from the 1 , 778 , 620 translations that constitute our training data . this data , in turn , is the product of an algorithm the sole linguistic input of which is a set of rules explaining how to find sentence boundaries in the two languages . we may justifiably claim , therefore , that these alignments are inherent in the canadian hansard data itself . in the alignment shown in figure 16 , all but one of the english words has fertility 1 . the final prepositional phrase has been moved to the front of the french sentence , but otherwise the translation is almost verbatim . notice , however , that the new proposal has been translated into les nouvelles propositions , demonstrating that number is not an invariant under translation . the empty cept has fertility 5 here . it generates eni , de3 , the comma , de16 , and dela . f t ( f i e ) 0 n ( 0 i e ) le 0.497 1 0.746 la 0.207 0 0.254 les 0.155 l ' 0.086 ce 0.018 cette 0.011 translation and fertility probabilities for the . 
f t ( f l e ) 0 n ( 0 i e ) agriculteurs 0.442 2 0.731 les 0.418 1 0.228 cultivateurs 0.046 0 0.039 producteurs 0.021 translation and fertility probabilities for farmers . translation and fertility probabilities for oil . translation and fertility probabilities for not . in figure 17 , two of the english words have fertility 0 , one has fertility 2 , and one , embattled , has fertility 5 . embattled is another word , like nodding , that eludes the french grasp and comes with a panoply of multi-word translations . the final example , in figure 18 , has several features that bear comment . the second word , speaker , is connected to the sequence l'orateur . like farmers above , it has trained to produce both the word that we naturally think of as its translation and the associated article . in our data , speaker always has fertility 2 and produces equally often l'orateur and le president . later in the sentence , starred is connected to the phrase marquees de un asterisque . from an initial situation in which each french word is equally probable as a translation of starred , we have arrived , through training , at a situation where it is possible to connect starred to just the right string of four words . near the end of the sentence , give is connected to donnerai , the first person singular future of donner , which means to give . we should be more comfortable if both will and give were connected to donnerai , but by limiting cepts to no more than one word , we have precluded this possibility . finally , the last 12 words of the english sentence , i now have the answer and will give it to the house , clearly correspond to the last 7 words of the french sentence , je donnerai la reponse a la chambre , but , literally , the french is i will give the answer to the house . there is nothing about now , have , and , or it , and each of these words has fertility 0 . translations that are as far as this from the literal are rather more the rule than the exception in our training data . one might cavil at the connection of la reponse to the answer rather than to it . we do not . models 1-5 provide an effective means for obtaining word-by-word alignments of translations , but as a means to achieve our real goal , which is translation , there is the best of 1.9 x 1025 alignments . the best of 8.4 x 1029 alignments . the best of 5.6 x 1031 alignments . room for improvement . we have seen that by ignoring the morphological structure of the two languages we dilute the strength of our statistical model , explaining , for example , each of the several tens of forms of each french verb independently . we have seen that by ignoring multi-word cepts , we are forced to give a false , or at least an unsatisfactory , account of some features in many translations . and finally , we have seen that our models are deficient , either in fact , as with models 3 and 4 , or in spirit , as with models 1 , 2 , and 5 . we have argued in section 2 that neither spiritual nor actual deficiency poses a serious problem , but this is not entirely true . let w ( e ) be the sum of pr ( f i e ) over well-formed french strings and let i ( e ) be the sum over ill-formed french strings . lit a deficient model , w ( e ) + i ( e ) < 1 . we say that the remainder of the probability is concentrated on the event failure and we write w ( e ) + i ( e ) + pr ( failureje ) = 1 . clearly , a model is deficient precisely when pr ( failurele ) > 0 . if pr ( failurele ) 0 , but i ( e ) > 0 , then the model is spiritually deficient . 
if w ( e ) were independent of e , neither form of deficiency would pose a problem , but because our models have no long-term constraints , w ( e ) decreases exponentially with 1 . when computing alignments , even this creates no problem because e and f are known . if , however , we are given f and asked to discover e , then we will find that the product pr ( e ) e ) is too small for long english strings as compared with short ones . as a result , we will improperly favor short english strings . we can counteract this tendency in part by replacing pr ( fle ) with c/ poi e ) for some empirically chosen constant c. this is treatment of the symptom rather than treatment of the disease itself , but it offers some temporary relief . the cure lies in better modeling . as we progress from model 1 to model 5 , evaluating the expectations that give us counts becomes increasingly difficult . for models 1 and 2 , we are able to include the contribution of each of the ( 1 + 1 ) rn possible alignments exactly . for later models , we include the contributions of fewer and fewer alignments . because most of the probability for each translation is concentrated by these models on a small number of alignments , this suboptimal procedure , mandated by the complexity of the models , yields acceptable results . in the limit , we can contemplate evaluating the expectations using only a single , probable alignment for each translation . when that alignment is the viterbi alignment , we call this viterbi training . it is easy to see that viterbi training converges : at each step , we reestimate parameters so as to make the current set of viterbi alignments as probable as possible ; when we use these parameters to compute a new set of viterbi alignments , we find either the old set or a set that is yet more probable . since the probability can never be greater than one , this process must converge . in fact , unlike the em algorithm in general , it must converge in a finite , though impractically large , number of steps because each translation has only a finite number of alignments . in practice , we are never sure that we have found the viterbi alignment . if we reinterpret the viterbi alignment to mean the most probable alignment that we can find rather than the most probable alignment that exists , then a similarly reinterpreted viterbi training algorithm still converges . we have already used this algorithm successfully as a part of a system to assign senses to english and french words on the basis of the context in which they appear ( brown et al . 1991a , 1991b ) . we expect to use it in models that we develop beyond model 5 . in models 1-5 , we restrict our attention to alignments with cepts containing no more than one word each . except in models 4 and 5 , cepts play little role in our development . even in these models , cepts are determined implicitly by the fertilities of the words in the alignment : words for which the fertility is greater than zero make up one-word cepts ; those for which it is zero do not . we can easily extend the generative process upon which models 3 , 4 , and 5 are based to encompass multi-word cepts . we need only include a step for selecting the ceptual scheme and ascribe fertilities to cepts rather than to words , requiring that the fertility of each cept be greater than zero . then , in equation ( 29 ) , we can replace the products over words in an english string with products over cepts in the ceptual scheme . 
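Viterbi training, as described above, alternates between aligning each translation with the current parameters and reestimating the parameters from those single alignments. A sketch in which best_alignment and reestimate are assumed callables standing in for the model-specific routines, not interfaces from the paper:

    def viterbi_training(corpus, params, best_alignment, reestimate, iterations=5):
        """Alternate between (1) finding the most probable alignment we can for
        each translation under the current parameters and (2) reestimating the
        parameters to make that set of alignments as probable as possible.
        The likelihood of the current alignment set never decreases, so the
        procedure converges."""
        for _ in range(iterations):
            alignments = [best_alignment(f, e, params) for f, e in corpus]
            params = reestimate(corpus, alignments)
        return params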
when we venture beyond one-word cepts , however , we must tread lightly . an english string can contain any of 42,005 one-word cepts , but there are more than 1.7 billion possible two-word cepts , more than 74 trillion three-word cepts , and so on . clearly , one must be discriminating in choosing potential multi-word cepts . the caution that we have displayed thus far in limiting ourselves to cepts with fewer than two words was motivated primarily by our respect for the featureless desert that multi-word cepts offer a priori . the viterbi alignments that we have computed with model 5 give us a frame of reference from which to expand our horizons to multi-word cepts . by inspecting them , we can find translations for a given multi-word sequence . we need only promote a multi-word sequence to cepthood when these translations differ substantially from what we might expect on the basis of the individual words that it contains . in english , either a boat or a person can be left high and dry , but in french , un bateau is not left haut et sec , nor une personne haute et seche . rather , a boat is left echoue and a person en plan . high and dry , therefore , is a promising three-word cept because its translation is not compositional . we treat each distinct sequence of letters as a distinct word . in english , for example , we recognize no kinship among the several forms of the verb to eat ( eat , ate , eaten , eats , and eating ) . in french , irregular verbs have many forms . in figure 7 , we have already seen 7 forms of devoir . altogether , it has 41 different forms . and there would be 42 if the french did not inexplicably drop the circumflex from the masculine plural past participle ( dus ) , thereby causing it to collide with the first and second person singular in the passΓ© simple , no doubt a source of endless confusion for the beleaguered francophone . the french make do with fewer forms for the multitude of regular verbs that are the staple diet of everyday speech . thus , manger ( to eat ) , has only 39 forms ( manger , mange , manges , . . mangeassent ) . models 1-5 must learn to connect the 5 forms of to eat to the 39 forms of manger . in the 28,850 , 104 french words that make up our training data , only 13 of the 39 forms of manger actually appear . of course , it is only natural that in the proceedings of a parliament , forms of manger are less numerous than forms of parler ( to speak ) , but even for parler , only 28 of the 39 forms occur in our data . if we were to encounter a rare form of one of these words , say , parlass ions or mangeassent , we would have no inkling of its relationship to speak or eat . a similar predicament besets nouns and adjectives as well . for example , composition is the among the most common words in our english vocabulary , but compositions is among the least common words . we plan to ameliorate these problems with a simple inflectional analysis of verbs , nouns , adjectives , and adverbs , so that the relatedness of the several forms of the same word is manifest in our representation of the data . for example , we wish to make evident the common pedigree of the different conjugations of a verb in french and in english ; of the singular and plural , and singular possessive and plural possessive forms of a noun in english ; of the singular , plural , masculine , and feminine forms of a noun or adjective in french ; and of the positive , comparative , and superlative forms of an adjective or adverb in english . 
thus , our intention is to transform ( je mange la peche i i eat the peach ) into , e.g. , ( je manger , 13spres la peche i i eat , x3spres the peach ) . here , eat is analyzed into a root , eat , and an ending , x3spres , that indicates the present tense form used except in the third person singular . similarly , mange is analyzed into a root , manger , and an ending , 13spres , that indicates the present tense form used for the first and third persons singular . these transformations are invertible and should reduce the french vocabulary by about 50 % and the english vocabulary by about 20 % . we hope that this will significantly improve the statistics in our models . that interesting bilingual lexical correlations can be extracted automatically from a large bilingual corpus was pointed out by brown et al . ( 1988 ) . the algorithm that they describe is , roughly speaking , equivalent to carrying out the first iteration of the em algorithm for our model 1 starting from an initial guess in which each french word is equally probable as a translation for each english word . they were unaware of a connection to the em algorithm , but they did realize that their method is not entirely satisfactory . for example , once it is clearly established that in ( la porte est rouge i the door is red ) , it is red that produces rouge , one is uncomfortable using this sentence as support for red producing porte or door producing rouge . they suggest removing words once a correlation between them has been clearly established and then reprocessing the resulting impoverished translations hoping to recover less obvious correlations now revealed by the departure of their more prominent relatives . from our present perspective , we see that the proper way to proceed is simply to carry out more iterations of the em algorithm . the likelihood for model 1 has a unique local maximum for any set of training data . as iterations proceed , the count for porte as a translation of red will dwindle away . in a later paper , brown et al . ( 1990 ) describe a model that is essentially the same as our model 3 . they sketch the em algorithm and show that , once trained , their model can be used to extract word-by-word alignments for pairs of sentences . they did not realize that the logarithm of the likelihood for model 1 is concave and , hence , has a unique local maximum . they were also unaware of the trick by which we are able to sum over all alignments when evaluating the counts for models 1 and 2 , and of the trick by which we are able to sum over all alignments when transferring parameters from model 2 to model 3 . as a result , they were unable to handle large vocabularies and so restricted themselves to vocabularies of only 9,000 words . nonetheless , they were able to align phrases in french with the english words that produce them as illustrated in their figure 3 . more recently , gale and church ( 1991a ) describe an algorithm similar to the one described in brown et al . ( 1988 ) . like brown et al. , they consider only the simultaneous appearance of words in pairs of sentences that are translations of one another . although algorithms like these are extremely simple , many of the correlations between english and french words are so pronounced as to fall prey to almost any effort to expose them . thus , the correlation of pairs like ( eau i water ) , ( lait i milk ) , ( pourquoi i why ) , ( chambre i house ) , and many others , simply can not be missed . 
they shout from the data , and any method that is not stone deaf will hear them . but many of the correlations speak in a softer voice : to hear them clearly , we must model the translation process , as brown et al . ( 1988 ) suggest and as brown et al . ( 1990 ) and the current paper actually do . only in this way can one hope to hear the quiet call of ( marquees d'un asterisque starred ) or the whisper of ( qui s'est fait bousculer i embattled ) . the series of models that we have described constitutes a mathematical embodiment of the powerfully compelling intuitive feeling that a word in one language can be translated into a word or phrase in another language . in some cases , there may be several or even several tens of translations depending on the context in which the word appears , but we should be quite surprised to find a word with hundreds of mutually exclusive translations . although we use these models as part of an automatic system for translating french into english , they provide , as a byproduct , very satisfying accounts of the word-by-word alignment of pairs of french and english strings . our work has been confined to french and english , but we believe that this is purely adventitious : had the early canadian trappers been manchurians later to be outnumbered by swarms of conquistadores , and had the two cultures clung stubbornly each to its native tongue , we should now be aligning spanish and chinese . we conjecture that local alignment of the component parts of any corpus of parallel texts is inherent in the corpus itself , provided only that it be large enough . between any pair of languages where mutual translation is important enough that the rate of accumulation of translated examples sufficiently exceeds the rate of mutation of the languages involved , there must eventually arise such a corpus . the linguistic content of our program thus far is scant indeed . it is limited to one set of rules for analyzing a string of characters into a string of words , and another set of rules for analyzing a string of words into a string of sentences . doubtless even these can be recast in terms of some information theoretic objective function . but it is not our intention to ignore linguistics , neither to replace it . rather , we hope to enfold it in the embrace of a secure probabilistic framework so that the two together may draw strength from one another and guide us to better natural language processing systems in general and to better machine translation systems in particular . we would like to thank many of our colleagues who read and commented on early versions of the manuscript , especially john lafferty . we would also like to thank the reviewers , who made a number of invaluable suggestions about the organization of the paper and pointed out many weaknesses in our original manuscript . if any weaknesses remain , it is not because of their failure to point them out , but because of our ineptness at responding adequately to their criticisms . english vocabulary english word english string random english string length of e random length of e position in e , i= 0 , 1 , ... ,1 word i of e the empty cept french vocabulary french word french string random french string length of f random length of f position in f , j= 1 , 2 , ... , m word j of f alignment cb length of ti position within a tablet , k =1 , 2 , ... 
, tik word k of ti ir a permutation of the positions of a tableau ik position in f for word k of ti for permutation 7r n- ( a ) neighboring alignments of a neighboring alignments of a with ij pegged b ( a ) alignment in jv ( a ) with greatest probability b°° ( a ) alignment obtained by applying b repeatedly to a bi_1 ( a ) alignment in .mi ( a ) with greatest probability bi & quot ; .Β° i ( a ) alignment obtained by applying bi-1 repeatedly to a a ( e ) class of english word e b ( f ) class of french word f a ] displacement of a word in f vacancies in f pt first position in e to the left of i that has non-zero fertility c , average position in f of the words connected to position i of e [ i ] position in e of the ith one word cept c [ i ] po translation model p with parameter values string length probabilities ( models 1 and 2 ) fertility probabilities ( models 3 , 4 , and 5 ) fertility probabilities for eo ( models 3 , 4 , and 5 ) alignment probabilities ( model 2 ) distortion probabilities ( model 3 ) distortion probabilities for the first word of a tablet ( model 4 ) distortion probabilities for the other words of a tablet ( model 4 ) distortion probabilities for the first word of a tablet ( model 5 ) distortion probabilities for the other words of a tablet ( model 5 ) we collect here brief descriptions of our various translation models and the formulae needed for training them . an english-to-french translation model p with parameters 9 is a formula for calculating a conditional probability , or likelihood , p0 ( f i e ) , for any string f of french words and any string e of english words . these probabilities satisfy where the sum ranges over all french strings f , and failure is a special symbol not in the french vocabulary . we interpret po ( f e ) as the probability that a translator will produce f when given e , and p0 ( failure i e ) as the probability that he will produce no translation when given e. we call a model deficient if p ( failure i e ) is greater than zero for some e. log-likelihood objective function . the log-likelihood of a sample of translations ( f ( s ) , e ( s ) ) , s = 1 , 2 , ... , s , is here c ( f , e ) is the empirical distribution of the sample , so that c ( f , e ) is 1/s times the number of times ( usually 0 or 1 ) that the pair ( f , e ) occurs in the sample . we determine values for the parameters 9 so as to maximize this log-likelihood for a large training sample of translations . for our models , the only alignments that have positive probability are those for which each word of f is connected to at most one word of e. relative objective function . we can compare hidden alignment models po and po using the relative objective function where p -0 ( a i f , e ) = ( a , f i e ) /p0 ( f i e ) . note that r ( p6 , po ) = 0 . r is related to by jensen 's inequality summing over e and f and using the definitions ( 51 ) and ( 54 ) we arrive at equation ( 55 ) . we can not create a good model or find good parameter values at a stroke . rather we employ a process of iterative improvement . for a given model we use current parameter values to find better ones , and in this way , from initial values we find locally optimal ones . then , given good parameter values for one model , we use them to find initial parameter values for another model . by alternating between these two steps we proceed through a sequence of gradually more sophisticated models . improving parameter values . 
from jensen 's inequality ( 55 ) , we see that 0 ( po ) is greater than 0 ( p0 ) if r ( p0- , po ) is positive . with p= p. this suggests the following between probability distributions p and q . however , whereas the relative entropy is never negative , r can take any value . the inequality ( 55 ) for r is the analog of the inequality d > 0 for d. iterative procedure , known as the em algorithm ( baum 1972 ; dempster , laird , and rubin 1977 ) , for finding locally optimal parameter values 0 for a model p : note that for any a , r ( po , po ) is non-negative at its maximum in 0 , since it is zero for = o . thus 0 ( p0 ) will not decrease from one iteration to the next . going from one model to another . jensen 's inequality also suggests a method for using parameter values 0 for one model i ' to find initial parameter values 0 for another model p : in contrast to the case where f ' = p , there may not be any 0 for which r ( p0 , po ) is non-negative . thus , it could be that , even for the best 0 , 11 ) ( 130 ) < 0 ( 1 ) 0 ) . parameter reestimation formulae . in order to apply these algorithms , we need to solve the maximization problems of steps 2 and 4 . for the models that we consider , we can do this explicitly . to exhibit the basic form of the solution , we suppose po is a model given by where the om , w e q , are real-valued parameters satisfying the constraints and for each w and ( a , f , e ) , c ( w ; a , f , e ) is a non-negative integer . ' we interpret 0 ( w ) as the probability of the event w and c ( w ; a , f , e ) as the number of times that this event occurs in ( a , f , e ) . note that the values for 0 that maximize the relative objective function r ( i50 , po ) subject to the contraints ( 59 ) are determined by the kuhn-tucker conditions where a is a lagrange multiplier , the value of which is determined by the equality constraint in equation ( 59 ) . these conditions are both necessary and sufficient for a maximum since r ( p6 , po ) is a concave function of the 0 ( w ) . by multiplying equation ( 61 ) by 0 ( w ) and using equation ( 60 ) and definition ( 54 ) of r , we obtain the parameter reestimation formulae we interpret e o ( w ; f , e ) as the expected number of times , as computed by the model po , that the event w occurs in the translation of e to f. thus 0 ( w ) is the ( normalized ) expected number of times , as computed by model po , that w occurs in the translations of the training sample . we can easily generalize these formulae to allow models of the form ( 58 ) for which the single equality constraint in equation ( 59 ) is replaced by multiple constraints where the subsets sti , p -- = 1 , 2 , ... form a partition of q . we need only modify equation e ( m i 1 ) string length probabilities t ( f i e ) translation probabilities here f e , f ; e e e or e = eo ; 1 =- ... ; and m = 1,2 , .... pe ( f , a i e ) = po ( m i e ) po ( a i m , e ) po ( f i a , m , e ) ( 67 ) assumptions . this model is not deficient . generation . equations ( 67 ) - ( 70 ) describe the following process for producing f from e : useful formulae . because of the independence assumptions ( 68 ) - ( 70 ) , we can calculate the sum over alignments ( 52 ) in closed form : equation ( 73 ) is useful in computations since it involves only 0 ( /m ) arithmetic operations , whereas the original sum over alignments ( 72 ) involves 0 ( /m ) operations . concavity . the objective function ( 51 ) for this model is a strictly concave function of the parameters . 
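Because of the closed form in equation (73), one EM iteration for Model 1 reduces to accumulating, for every French position, the normalized contribution t(f | e_i) / sum_i' t(f | e_i') and then renormalizing per English word. A toy sketch; the data layout is an assumption, e_words[0] plays the role of the empty cept, and t should be initialized (for example, uniformly) for every co-occurring pair.

    from collections import defaultdict

    def model1_em_step(corpus, t):
        """One EM iteration for Model 1. `corpus` is a list of (f_words, e_words)
        pairs; `t` maps (f, e) -> probability. Uses the per-position
        factorization licensed by the closed form of equation (73)."""
        counts = defaultdict(float)
        totals = defaultdict(float)
        for f_words, e_words in corpus:
            for f in f_words:
                denom = sum(t.get((f, e), 0.0) for e in e_words)
                if denom == 0.0:
                    continue
                for e in e_words:
                    c = t.get((f, e), 0.0) / denom   # expected count from this position
                    counts[(f, e)] += c
                    totals[e] += c
        return {(f, e): c / totals[e] for (f, e), c in counts.items()}

Iterating this step from a strictly positive initialization approaches the unique maximum discussed just below.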
in fact , from equations ( 51 ) and ( 73 ) , which is clearly concave in e ( m i 1 ) and t ( f i e ) since the logarithm of a sum is concave , and the sum of concave functions is concave . because ' ; / ) is concave , it has a unique local maximum . moreover , we will find this maximum using the em algorithm , provided that none of our initial parameter values is zero . e ( m i 1 ) string length probabilities t ( f i e ) translation probabilities a ( i i j , l , m ) alignment probabilities here i= 0 , . ,1 ; and j = 1 , . , m. general formula . this model is not deficient . model 1 is the special case of this model in which the alignment probabilities are uniform : a ( i = ( i + 1 ) -1 for all i . generation . equations ( 75 ) - ( 78 ) describe the following process for producing f from e : useful formulae . just as for model 1 , the independence assumptions allow us to calculate the sum over alignments ( 52 ) in closed form : by assumption ( 77 ) the connections of a are independent given the length m of f. using equation ( 81 ) we find that they are also independent given f : where viterbi alignment . for this model , and thus also for model 1 , we can express in closed form the viterbi alignment v ( f i e ) between a pair of strings ( f , e ) : parameter reestimation formulae . we can find the parameter values 0 that maximize the relative objective function r ( po , po ) by applying the considerations of section b.2 . the counts c ( w ; a , f , e ) of equation ( 58 ) are we obtain the parameter reestimation formulae for t ( f i e ) and a ( i i j , l , m ) by using these counts in equations ( 62 ) β€” ( 66 ) . equation ( 64 ) requires a sum over alignments . if po satisfies as is the case for models 1 and 2 ( see equation ( 82 ) ) , then this sum can be calculated explicitly : equations ( 85 ) β€” ( 89 ) involve only 0 ( /m ) arithmetic operations , whereas the sum over alignments involves 0 ( /m ) operations . t ( f e ) translation probabilities n ( cb e ) fertility probabilities po , pi fertility probabilities for eo d ( jl id , m ) distortion probabilities here caβ€’ = 0,1,2 , β€’ β€’ . general formulae . where in equation ( 99 ) , the product runs over all ] = 1,2 , ... , m except those for which aβ€’ 0 . by summing over all pairs ( r , ir ) consistent with ( f , a ) we find the factors of 0 , ! in equation ( 101 ) arise because there are ft=o cbi ! equally probable terms in the sum ( 100 ) . parameter reestimation formulae . we can find the parameter values u that maximize the relative objective function r ( po , po ) by applying the considerations of section b.2 . the counts c ( o . ) ; a , f , e ) of equation ( 58 ) are we obtain the parameter reestimation formulae for t ( f i e ) , a ( j i i,1 , m ) , and t ( q5 i e ) by using these counts in equations ( 62 ) β€” ( 66 ) . equation ( 64 ) requires a sum over alignments . if pti satisfies ti ( a if , e ) =14 ( a ; i , e ) , ( 105 ) j=1 as is the case for models 1 and 2 ( see equation ( 82 ) ) , then this sum can be calculated explicitly for 'et- , ( f e ; f , e ) and eo ( ji i ; f , e ) : -ea ( f i e ; f , e ) m e-6 ( j i ; f , e ) eepo ( i i ] , f , e ) 5 ( e , ei ) 6 ( f jj ) , i=o 1=1 ijo ( i unfortunately , there is no analogous formula for eo ( 0 i e ; f , e ) . instead we must be content with in equation ( 108 ) , r denotes the set of all partitions of cb . recall that a partition of 0 is a decomposition of 0 as a sum of positive integers . 
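The partition sums needed for the fertility counts in equation (108) can be enumerated directly for the small fertilities that arise in practice. A sketch; the assertions check the partition counts quoted just below.

    def partitions(n, max_part=None):
        """Yield the partitions of n as non-increasing tuples of positive integers."""
        if max_part is None or max_part > n:
            max_part = n
        if n == 0:
            yield ()
            return
        for k in range(max_part, 0, -1):
            for rest in partitions(n - k, k):
                yield (k,) + rest

    assert sum(1 for _ in partitions(5)) == 7
    assert sum(1 for _ in partitions(10)) == 42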
for example , 0 = 5 has 7 partitions since 1 + 1 + 1 + 1 + 1 = 1 + 1 + 1 + 2 = 1 + 1 + 3 = 1 + 2 + 2 = 1 + 4 = 2 + 3 = 5 . for a partition y , we let -yk be the number of times that k appears in the sum , so that 0 = k-y . if -y is the partition corresponding to 1 + 1 + 3 , then = 2 , 'y3 = 1 , and -yk = 0 for k other than 1 or 3 . we adopt the convention that ro consists of the single element -y with -yk = 0 for all k. equation ( 108 ) allows us to compute the counts -4 ( 0 i e ; f , e ) in 0 ( //n + 0g ) operations , where g is the number of partitions of 0 . although g grows with 0 like ( 400 ) -1 exp 7a/20/3 [ 11 ] , it is manageably small for small 0 . for example , 0 = 10 has 42 partitions . proof of formula ( 108 ) . introduce the generating functional where x is an indeterminant . then to obtain equation ( 113 ) , rearrange the order of summation and sum over 0 to eliminate the 6-function of 0 . to obtain equation ( 114 ) , note that 0 , = em1 6 ( i , a ) and so i= x4 ) . ' flr x6 ( j'al ) . to obtain equation ( 115 ) , interchange the order of the sums on a ] /-=-1 and the product on j . to obtain equation ( 116 ) , note that in the sum on a , the only term for which the power of x is nonzero is the one for which a = i . now note that for any indeterminants x , yl , y2 , , where zk = ( -1 ) this follows from the calculation the reader will notice that the left-hand side of equation ( 120 ) involves only powers of x up to m , while equations ( 121 ) β€” ( 122 ) involve all powers of x . this is because the zk are not algebraically independent . in fact , for > m , the coefficient of x0 on the right-hand side of equation ( 122 ) must be zero . it follows that we can express z β€˜ k as a polynomial in zk , k =- 1,2 , β€’ β€’ β€’ , mβ€’ using equation ( 118 ) we can identify the coefficient of x4 ' in equation ( 117 ) . we obtain equation ( 108 ) by combining equations ( 117 ) , ( 118 ) , and the definitions ( 109 ) β€” ( 111 ) and ( 119 ) . b.6 model 4 translation probabilities parameters . fertility probabilities qt . e ) fertility probabilities for e0 n ( 0 e ) distortion probabilities for the first word of a tablet po , p1 distortion probabilities for the other words of a tablet cli ( 64 i a,13 ) d > i ( aj b ) here 64 is an integer ; a is an english class ; and b is a french class . where in equation ( 129 ) , p , is the first position to the left of i for which c/ ; β€’i > 0 , and cp is the ceiling of the average position of the words of rp : note that equations ( 125 ) , ( 126 ) , and ( 128 ) are identical to the corresponding formulae ( 93 ) , ( 94 ) , and ( 96 ) for model 3 . generation . equations ( 123 ) - ( 127 ) describe the following process for producing f or failure from e : 1-4 . choose a tableau t by following steps 1-4 for model 3 , 5 . for each i =- 1,2 , ... , l and each k = 1,2 , ... , oi choose a position & quot ; ffik as follows . if k 1 then choose 7r1 according to the distribution a ( e p , ) , b ( m ) ) . if k > 1 then choose 7rik greater than rk1 according to the distribution 6-8 . finish generating f by following steps 6-8 for model 3 . b.7 model 5 translation probabilities parameters . fertility probabilities t ( f i e ) fertility probabilities for e0 n ( 0 e ) distortion probabilities for the first word of a tablet po , pi distortion probabilities for the other words of a tablet di ( 64i b v ) d > 1 ( 64 i b , v ) here v = 1,2 , β€’ β€’ , m .
the mathematics of statistical machine translation : parameter estimation we describe a series of five statistical models of the translation process and give algorithms for estimating the parameters of these models given a set of pairs of sentences that are translations of one another . we define a concept of word-by-word alignment between such pairs of sentences . for any given pair of such sentences , each of our models assigns a probability to each of the possible word-by-word alignments . we give an algorithm for seeking the most probable of these alignments . although the algorithm is suboptimal , the alignment thus obtained accounts well for the word-by-word relationships in the pair of sentences . we have a great deal of data in french and english from the proceedings of the canadian parliament . accordingly , we have restricted our work to these two languages ; but we feel that because our algorithms have minimal linguistic content they would work well on other pairs of languages . we also feel , again because of the minimal linguistic content of our algorithms , that it is reasonable to argue that word-by-word alignments are inherent in any sufficiently large bilingual corpus . our model for statistical machine translation ( smt ) is focused on word-to-word translation and is based on the noisy channel approach .
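for readers unfamiliar with the noisy channel formulation mentioned above , the decision rule it implies is the standard one ( textbook material rather than a claim specific to this paper ) :

```latex
\hat{e} \;=\; \operatorname*{arg\,max}_{e} \Pr(e \mid f)
       \;=\; \operatorname*{arg\,max}_{e} \Pr(e)\,\Pr(f \mid e)
```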
optimizing chinese word segmentation for machine translation performance previous work has shown that chinese word segmentation is useful for machine translation to english , yet the way different segmentation strategies affect mt is still poorly understood . in this paper , we demonstrate that optimizing segmentation for an existing segmentation standard does not always yield better mt performance . we find that other factors such as segmentation consistency and granularity of chinese β€œ words ” can be more important for machine translation . based on these findings , we implement methods inside a conditional random field segmenter that directly optimize segmentation granularity with respect to the mt task , providing an improvement of 0.73 bleu . we also show that improving segmentation consistency using external lexicon and proper noun features yields a 0.32 bleu increase . word segmentation is considered an important first step for chinese natural language processing tasks , because chinese words can be composed of multiple characters but with no space appearing between words . almost all tasks could be expected to benefit by treating the character sequence β€œ ❯it ” together , with the meaning smallpox , rather than dealing with the individual characters β€œ ❯ ” ( sky ) and β€œ it ” ( flower ) . without a standardized notion of a word , traditionally , the task of chinese word segmentation starts from designing a segmentation standard based on linguistic and task intuitions , and then aiming to building segmenters that output words that conform to the standard . one widely used standard is the penn chinese treebank ( ctb ) segmentation standard ( xue et al. , 2005 ) . it has been recognized that different nlp applications have different needs for segmentation . chinese information retrieval ( ir ) systems benefit from a segmentation that breaks compound words into shorter β€œ words ” ( peng et al. , 2002 ) , paralleling the ir gains from compound splitting in languages like german ( hollink et al. , 2004 ) , whereas automatic speech recognition ( asr ) systems prefer having longer words in the speech lexicon ( gao et al. , 2005 ) . however , despite a decade of very intense work on chinese to english machine translation ( mt ) , the way in which chinese word segmentation affects mt performance is very poorly understood . with current statistical phrase-based mt systems , one might hypothesize that segmenting into small chunks , including perhaps even working with individual characters would be optimal . this is because the role of a phrase table is to build domain and application appropriate larger chunks that are semantically coherent in the translation process . for example , even if the word for smallpox is treated as two one-character words , they can still appear in a phrase like β€œ ❯ itβ€” * smallpox ” , so that smallpox will still be a candidate translation when the system translates β€œ ❯ ” β€œ it ” . nevertheless , xu et al . ( 2004 ) show that an mt system with a word segmenter outperforms a system working with individual characters in an alignment template approach . on different language pairs , ( koehn and knight , 2003 ) and ( habash and sadat , 2006 ) showed that data-driven methods for splitting and preprocessing can improve arabic-english and german-english mt . beyond this , there has been no finer-grained analysis of what style and size of word segmentation is optimal for mt . 
moreover , most discussion of segmentation for other tasks relates to the size units to identify in the segmentation standard : whether to join or split noun compounds , for instance . people generally assume that improvements in a system ’ s word segmentation accuracy will be monotonically reflected in overall system performance . this is the assumption that justifies the concerted recent work on the independent task of chinese word segmentation evaluation at sighan and other venues . however , we show that this assumption is false : aspects of segmenters other than error rate are more critical to their performance when embedded in an mt system . unless these issues are attended to , simple baseline segmenters can be more effective inside an mt system than more complex machine learning based models , with much lower word segmentation error rate . in this paper , we show that even having a basic word segmenter helps mt performance , and we analyze why building an mt system over individual characters doesn ’ t function as well . based on an analysis of baseline mt results , we pin down four issues of word segmentation that can be improved to get better mt performance . ( i ) while a feature-based segmenter , like a support vector machine or conditional random field ( crf ) model , may have very good aggregate performance , inconsistent context-specific segmentation decisions can be quite harmful to mt system performance . ( ii ) a perceived strength of feature-based systems is that they can generate out-of-vocabulary ( oov ) words , but these can hurt mt performance , when they could have been split into subparts from which the meaning of the whole can be roughly compositionally derived . ( iii ) conversely , splitting oov words into noncompositional subparts can be very harmful to an mt system : it is better to produce such oov items than to split them into unrelated character sequences that are known to the system . one big source of such oov words is named entities . ( iv ) since the optimal granularity of words for phrase-based mt is unknown , we can benefit from a model which provides a knob for adjusting average word size . we build several different models to address these issues and to improve segmentation for the benefit of mt . first , we emphasize lexicon-based features in a feature-based sequence classifier to deal with segmentation inconsistency and over-generating oov words . having lexicon-based features reduced the mt training lexicon by 29.5 % , reduced the mt test data oov rate by 34.1 % , and led to a 0.38 bleu point gain on the test data ( mt05 ) . second , we extend the crf label set of our crf segmenter to identify proper nouns . this gives 3.3 % relative improvement on the oov recall rate , and a 0.32 improvement in bleu . finally , we tune the crf model to generate shorter or longer words to directly optimize the performance of mt . for mt , we found that it is preferred to have words slightly shorter than the ctb standard . the paper is organized as follows : we describe the experimental settings for the segmentation task and the task in section 2 . in section 3.1 we demonstrate that it is helpful to have word segmenters for mt , but that segmentation performance does not directly correlate with mt performance . we analyze what characteristics of word segmenters most affect mt performance in section 3.2 . 
in section 4 and 5 we describe how we tune a crf model to fit the β€œ word ” granularity and also how we incorporate external lexicon and information about named entities for better mt performance . for directly evaluating segmentation performance , we train each segmenter with the sighan bakeoff 2006 training data ( the upuc data set ) and then evaluate on the test data . the training data contains 509k words , and the test data has 155k words . the percentage of words in the test data that are unseen in the training data is 8.8 % . detail of the bakeoff data sets is in ( levow , 2006 ) . to understand how each segmenter learns about oov words , we will report the f measure , the in-vocabulary ( iv ) recall rate as well as oov recall rate of each segmenter . the mt system used in this paper is moses , a stateof-the-art phrase-based system ( koehn et al. , 2003 ) . we build phrase translations by first acquiring bidirectional giza++ ( och and ney , 2003 ) alignments , and using moses ’ grow-diag alignment symmetrization heuristic.1 we set the maximum phrase length to a large value ( 10 ) , because some segmenters described later in this paper will result in shorter words , therefore it is more comparable if we increase the maximum phrase length . during decoding , we incorporate the standard eight feature functions of moses as well as the lexicalized reordering model . we tuned the parameters of these features with minimum error rate training ( mert ) ( och , 2003 ) on the nist mt03 evaluation data set ( 919 sentences ) , and then test the mt performance on nist mt03 and mt05 evaluation data ( 878 and 1082 sentences , respectively ) . we report the mt performance using the original bleu metric ( papineni et al. , 2001 ) . all bleu scores in this paper are uncased . the mt training data was subsampled from gale year 2 training data using a collection of character 5-grams and smaller n-grams drawn from all segmentations of the test data . since the mt training data is subsampled with character n-grams , it is not biased towards any particular word segmentation . the mt training data contains 1,140,693 sentence pairs ; on the chinese side there are 60,573,223 non-whitespace characters , and the english sentences have 40,629,997 words . our main source for training our five-gram language model was the english gigaword corpus , and we also included close to one million english sentences taken from ldc parallel texts : gale year 1 training data ( excluding fouo data ) , sinorama , asianet , and hong kong news . we restricted the gigaword corpus to a subsample of 25 million sentences , because of memory constraints . in this section , we experiment with three types of segmenters – character-based , lexicon-based and feature-based – to explore what kind of characteristics are useful for segmentation for mt . the training data for the segmenter is two orders of magnitude smaller than for the mt system , it is not terribly well matched to it in terms of genre and variety , and the information an mt system learns about alignment of chinese to english might be the basis for a task appropriate segmentation style for chinese-english mt . a phrase-based mt system like moses can extract β€œ phrases ” ( sequences of tokens ) from a word alignment and the system can construct the words that are useful . these observations suggest the first hypothesis . 
observation in the experiments we conducted , we found that the phrase table can not capture everything a chinese word segmenter can do , and therefore having word segmentation helps phrase-based mt systems . to show that having word segmentation helps mt , we compare a lexicon-based maximum-matching segmenter with character-based segmentation ( treating each chinese character as a word ) . the lexicon-based segmenter finds words by greedily matching the longest words in the lexicon in a left-to-right fashion . we will later refer to this segmenter as maxmatch . the maxmatch segmenter is a simple and common baseline for the chinese word segmentation task . the segmentation performance of maxmatch is not very satisfying because it cannot generalize to capture words it has never seen before . however , having a basic segmenter like maxmatch still gives the phrase-based mt system a win over the character-based segmentation ( treating each chinese character as a word ) . we will refer to the character-based segmentation as charbased . in table 1 , we can see that on the chinese word segmentation task , having maxmatch is obviously better than not trying to identify chinese words at all ( charbased ) . as for mt performance , in table 1 we see that having a segmenter , even as simple as maxmatch , can help the phrase-based mt system by about 1.37 bleu points on all 1082 sentences of the test data ( mt05 ) . ( different phrase extraction heuristics might affect the results ; in our experiments , grow-diag outperforms both one-to-many and many-to-one for both maxmatch and charbased , so we report the results only on grow-diag . ) also , we tested the performance on 828 sentences of mt05 where all elements are in vocabulary for both maxmatch and charbased . maxmatch achieved 32.09 bleu and charbased achieved 30.28 bleu , which shows that on the sentences where all elements are in vocabulary , maxmatch is still significantly better than charbased . therefore , hypothesis 1 is refuted . analysis we hypothesized in hypothesis 1 that the phrase table in a phrase-based mt system should be able to capture the meaning by building " phrases " on top of character sequences . based on the experimental result in table 1 , we see that using character-based segmentation ( charbased ) actually performs reasonably well , which indicates that the phrase table does capture the meaning of character sequences to a certain extent . however , the results also show that there is still some benefit in having word segmentation for mt . we analyzed the decoded output of both systems ( charbased and maxmatch ) on the development set ( mt03 ) . we found that the advantage of maxmatch over charbased is two-fold : ( i ) lexical : it enhances the ability to disambiguate the case when a character has very different meanings in different contexts , and ( ii ) reordering : it is easier to move one unit around than having to move two consecutive units at the same time , so having words as the basic units helps the reordering model . for the first advantage , one example is the character " ➐ " , which can either mean " intelligence " or serve as an abbreviation for chile ( βžβ‘€ ) . the comparison between charbased and maxmatch is listed in table 2 . the word βžˆβžβœ‡ ( dementia ) is unknown to both segmenters . however , maxmatch gave a better translation of the character ➐ . the issue here is not that the " ➐ " -> " intelligence " entry never appears in the phrase table of charbased .
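before turning to why the character-based system fails on this example , here is a minimal python sketch of the greedy left-to-right longest-match strategy that maxmatch ( introduced above ) implements ; the function name , the lexicon-as-set representation , and the maximum word length are illustrative assumptions .

```python
def max_match(sentence, lexicon, max_word_len=6):
    """Greedy left-to-right longest-match segmentation.

    sentence: a string of Chinese characters with no whitespace.
    lexicon:  a set of known words.
    Characters that start no lexicon word fall out as single-character tokens.
    """
    tokens, i = [], 0
    while i < len(sentence):
        # Try the longest candidate first and shrink until a lexicon hit
        # (or a single character, which is always accepted).
        for j in range(min(len(sentence), i + max_word_len), i, -1):
            if sentence[i:j] in lexicon or j == i + 1:
                tokens.append(sentence[i:j])
                i = j
                break
    return tokens
```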
the real issue is , when ➐ means chile , it is usually followed by the character β‘€ . so by grouping them together , maxmatch avoided falsely increasing the probability of translating the stand-alone ➐ into chile . based on our analysis , this ambiguity occurs the most when the character-based system is dealing with a rare or unseen character sequence in the training data , and also occurs more often when dealing with transliterations . the reason is that characters composing a transliterated foreign named entity usually doesn ’ t preserve their meanings ; they are just used to compose a chinese word that sounds similar to the original word – much more like using a character segmentation of english words . another example of this kind is β€œ ❈✎❴➦βœͺβž»βœ‡ ” ( alzheimer ’ s disease ) . the mt system using charbased segmentation tends to translate some characters individually and drop others ; while the system using maxmatch segmentation is more likely to translate it right . the second advantage of having a segmenter like the lexicon-based maxmatch is that it helps the reordering model . results in table 1 are with the linear distortion limit defaulted to 6 . since words in charbased are inherently shorter than maxmatch , having the same distortion limit means charbased is limited to a smaller context than maxmatch . to make a fairer comparison , we set the linear distortion limit in moses to unlimited , removed the lexicalized reordering model , and retested both systems . with this setting , maxmatch is 0.46 bleu point better than charbased ( 29.62 to 29.16 ) on mt03 . this result suggests that having word segmentation does affect how the reordering model works in a phrasebased system . hypothesis 2 . better segmentation performance should lead to better mt performance observation we have shown in hypothesis 1 that it is helpful to segment chinese texts into words first . in order to decide a segmenter to use , the most intuitive thing to do is to find one that gives higher f measure on segmentation . our experiments show that higher f measure does not necessarily lead to higher bleu score . in order to contrast with the simple maximum matching lexicon-based model ( maxmatch ) , we built another segmenter with a crf model . crf is a statistical sequence modeling framework introduced by lafferty et al . ( 2001 ) , and was first used for the chinese word segmentation task by peng et al . ( 2004 ) , who treated word segmentation as a binary decision task . we optimized the parameters with a quasi-newton method , and used gaussian priors to prevent overfitting . the probability assigned to a label sequence for a particular sequence of characters by a crf is given by the equation : x is a sequence of t unsegmented characters , z ( x ) is the partition function that ensures that equation 1 is a probability distribution , { fk } kk=1 is a set of feature functions , and y is the sequence of binary predictions for the sentence , where the prediction yt = +1 indicates the t-th character of the sequence is preceded by a space , and where yt = βˆ’1 indicates there is none . we trained a crf model with a set of basic features : character identity features of the current character , previous character and next character , and the conjunction of previous and current characters in the zero-order templates . we will refer to this segmenter as crf-basic . table 3 shows that the feature-based segmenter crf-basic outperforms the lexicon-based maxmatch by 5.9 % relative f measure . 
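for reference , the sentence-level crf probability introduced above ( equation 1 ) , with k feature functions f1 , ... , fk and weights λ1 , ... , λk , takes the standard linear-chain form below ; the exact arguments of the feature functions depend on the paper's templates , so this is a generic rendering rather than a verbatim copy .

```latex
P_{\lambda}(\mathbf{y}\mid\mathbf{x}) \;=\;
\frac{1}{Z(\mathbf{x})}\,
\exp\!\Big(\sum_{t=1}^{T}\sum_{k=1}^{K}\lambda_{k}\,f_{k}(\mathbf{x},\,y_{t-1},\,y_{t},\,t)\Big),
\qquad
Z(\mathbf{x}) \;=\; \sum_{\mathbf{y}'}
\exp\!\Big(\sum_{t=1}^{T}\sum_{k=1}^{K}\lambda_{k}\,f_{k}(\mathbf{x},\,y'_{t-1},\,y'_{t},\,t)\Big).
```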
comparing the oov recall rate and the iv recall rate , the reason is that crf-basic wins by a large margin on the oov recall rate . we see that a feature-based segmenter like crf-basic clearly has a stronger ability to recognize unseen words . on mt performance , however , crf-basic is 0.38 bleu points worse than maxmatch on the test set . in section 3.2 , we will look at how the mt training and test data are segmented by each segmenter , and provide statistics and analysis for why certain segmenters are better than others . in section 3.1 we have refuted two hypotheses . now we know that : ( i ) phrase table construction does not fully capture what a word segmenter can do , so it is useful to have word segmentation for mt ; ( ii ) a segmenter with a higher f measure does not necessarily perform better on the mt task . to understand what factors other than segmentation f measure can affect mt performance , we introduce another crf segmenter , crf-lex , that includes lexicon-based features by using external lexicons . more details of crf-lex will be described in section 5.1 . from table 3 , we see that the segmentation f measure ranking is crf-lex > crf-basic > maxmatch . and now we know that a better segmentation f measure does not always lead to a better mt bleu score , because in terms of mt performance the ranking is crf-lex > maxmatch > crf-basic . in table 4 , we list some statistics of each segmenter to explain this phenomenon . first we look at the lexicon size of the mt training and test data . while segmenting the mt data , crf-basic generates an mt training lexicon size of 583k unique word tokens , and maxmatch has a much smaller lexicon size of 39k . crf-lex performs best on mt , but its mt training lexicon size and test lexicon oov rate are still pretty high compared to maxmatch . only examining the mt training and test lexicon size still doesn't fully explain why crf-lex outperforms maxmatch : maxmatch generates a smaller mt lexicon and lower oov rate , but for mt it wasn't better than crf-lex , which has a bigger lexicon and higher oov rate . in order to understand why maxmatch performs worse on mt than crf-lex but better than crf-basic , we use the conditional entropy of segmentation variations to measure consistency . we use the gold segmentation of the sighan test data as a guideline . for every word type wi , we collect all the different pattern variations vij in the segmentation we want to examine . for example , for a word " abc " in the gold segmentation , we look at how it is segmented by a segmenter . there are many possibilities . if we use cx and cy to indicate other chinese characters and white space to indicate word boundaries , " cx abc cy " is the correct segmentation , because the three characters are properly segmented from both sides and they are concatenated with each other . it can also be segmented as " cx a bc cy " , which means that although the outer boundaries are correct , the first character is separated from the other two . or , it can be segmented as " cxa bccy " , which means the first character was actually attached to the previous word , while bc begins the next word . every time a particular word type wi appears in the text , we consider a segmenter more consistent if it can segment wi in the same way every time , even if that segmentation doesn't necessarily match the gold standard segmentation .
for example , if " abc " is a chinese person name which appears 100 times in the gold standard data , and one segmenter segments it as cx a bc cy all 100 times , then this segmenter is still considered to be very consistent , even if it doesn't exactly match the gold standard segmentation . using this intuition , the conditional entropy of segmentation variations h ( v|w ) is defined as h ( v|w ) = - Σ_wi p ( wi ) Σ_j p ( vij | wi ) log p ( vij | wi ) , with the probabilities estimated from the observed segmentation counts . now we can look at the overall conditional entropy h ( v|w ) to compare the consistency of each segmenter . in table 4 , we can see that even though maxmatch has a much smaller mt lexicon size than crf-lex , when we examine the consistency of how maxmatch segments in context , we find the conditional entropy is much higher than crf-lex . we can also see that crf-basic has a higher conditional entropy than the other two . the conditional entropy h ( v|w ) shows how consistent each segmenter is , and it correlates with the mt performance in table 4 . note that consistency is only one of the competing factors of how good a segmentation is for mt performance . for example , a character-based segmentation will always have the best consistency possible , since every word abc will just have one pattern : cx a b c cy . but from section 3.1 we see that charbased performs worse than both maxmatch and crf-basic on mt , because having word segmentation can help the granularity of the chinese lexicon match that of the english lexicon . in conclusion , for mt performance , it is helpful to have consistent segmentation , while still having a word segmentation that matches the granularity of the chinese lexicon to that of the english lexicon . we have shown earlier that word-level segmentation vastly outperforms character-based segmentation in mt evaluations . since the word segmentation standard under consideration ( chinese treebank ( xue et al. , 2005 ) ) was neither specifically designed nor optimized for mt , it seems reasonable to investigate whether any segmentation granularity in the continuum between character-level and ctb-style segmentation is more effective for mt . in this section , we present a technique for directly optimizing a segmentation property ( the characters-per-token average ) for translation quality , which yields significant improvements in mt performance . in order to calibrate the average word length produced by our crf segmenter , i.e. , to adjust the rate of word boundary predictions ( yt = +1 ) , we apply a relatively simple technique ( minkov et al. , 2006 ) originally devised for adjusting the precision/recall tradeoff of any sequential classifier . specifically , the weight vector w and feature vector of a trained linear sequence classifier are augmented at test time to include new class-conditional feature functions to bias the classifier towards particular class labels . in our case , since we wish to increase the frequency of word boundaries , we add a feature function that fires whenever the label is +1 . its weight λ0 controls the extent to which the classifier will make positive predictions , with very large positive λ0 values causing only positive predictions ( i.e. , character-based segmentation ) and large negative values effectively disabling segmentation boundaries .
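the bias just described amounts to one extra class-conditional feature with a fixed weight λ0 added at test time . the sketch below illustrates the effect with a deliberately simplified per-character decision rule ( a real crf applies the bias inside sequence decoding over the whole sentence ) ; the names and the scoring interface are assumptions .

```python
def biased_boundary_decisions(boundary_scores, bias):
    """Illustrate the test-time bias of Minkov et al. (2006).

    boundary_scores: raw model scores in favor of starting a new word before
                     each character (higher = more likely label +1).
    bias:            the lambda_0 weight of the added feature, which fires
                     whenever the label is +1.  Large positive values yield
                     character-like segmentation; large negative values merge
                     everything into long words.
    """
    return [+1 if score + bias > 0.0 else -1 for score in boundary_scores]
```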
table 5 displays how changes of the bias parameter λ0 affect segmentation granularity . ( note that the character-per-token averages provided in the table consider each non-chinese word ( e.g. , foreign names , numbers ) as one character , since our segmentation post-processing prevents these tokens from being segmented . ) since we are interested in analyzing the different regimes of mt performance between ctb segmentation and character-based segmentation , we performed a grid search in the range between λ0 = 0 ( the maximum-likelihood estimate ) and λ0 = 32 ( a value that is large enough to produce only positive predictions ) . for each λ0 value , we ran an entire mt training and testing cycle , i.e. , we re-segmented the entire training data , ran giza++ , acquired phrasal translations that abide by this new segmentation , and ran mert and evaluations on data segmented with the same λ0 value . we notice that negative bias values ( λ0 = -2 ) slightly improve segmentation performance . we also notice that raising λ0 yields relatively consistent improvements in mt performance , yet causes segmentation performance ( f measure ) to become increasingly worse . while the latter finding is not particularly surprising , it further confirms that segmentation and mt evaluations can yield rather different outcomes . we chose λ0 = 2 on another dev set ( mt02 ) . on the test set mt05 , λ0 = 2 yields 31.47 bleu , which represents a quite large improvement compared to the unbiased segmenter ( 30.95 bleu ) . further reducing the average number of characters per token yields gradual drops of performance down to character-level segmentation ( λ0 ≥ 32 , 29.36 bleu ) . here are some examples of how setting λ0 = 2 shortens the words in a way that can help mt . in section 3.1 we showed that a statistical sequence model with rich features can generalize better than maximum matching segmenters . however , it also inconsistently over-generates a big mt training lexicon and oov words in mt test data , and thus causes a problem for mt . to improve a feature-based sequence model for mt , we propose four different approaches that deal with named entities , the optimal word length for mt , and joint search for segmentation and mt decoding . one way to improve the consistency of the crf model is to make use of external lexicons ( which are not part of the segmentation training data ) to add lexicon-based features . all the features we use are listed in table 6 . our linguistic features are adopted from ( ng and low , 2004 ) and ( tseng et al. , 2005 ) . there are three categories of features : character identity n-grams , morphological features , and character reduplication features . our lexicon-based features are adopted from ( shi and wang , 2007 ) , where lbegin ( c0 ) , lmid ( c0 ) and lend ( c0 ) represent the maximum length of words found in a lexicon that contain the current character as either the first , middle or last character , and we group any length equal to or longer than 6 together . the linguistic features help capture words that are unseen to the segmenter , while the lexicon-based features constrain the segmenter with external knowledge of what sequences are likely to be words . we built a crf segmenter with all the features listed in table 6 ( crf-lex ) .
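a small python sketch of the lexicon-based features lbegin ( c0 ) , lmid ( c0 ) and lend ( c0 ) just described , assuming the external lexicons have been merged into a single python set and lengths are capped at 6 as above ; a realistic implementation would index the lexicon by character rather than scanning it per query .

```python
def lexicon_features(char, lexicon, max_len=6):
    """Lbegin/Lmid/Lend: maximum length (capped at max_len) of any lexicon
    word containing `char` as its first, middle, or last character."""
    l_begin = l_mid = l_end = 0
    for word in lexicon:
        if char not in word:
            continue
        n = min(len(word), max_len)
        if word.startswith(char):
            l_begin = max(l_begin, n)
        if word.endswith(char):
            l_end = max(l_end, n)
        if char in word[1:-1]:
            l_mid = max(l_mid, n)
    return {"Lbegin": l_begin, "Lmid": l_mid, "Lend": l_end}
```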
the external lexicons we used for the lexicon-based features come from various sources including named entities collected from wikipedia and the chinese section of the un website , named entities collected by harbin institute of technology , the adso dictionary , emm news explorer , online chinese tools , online dictionary from peking university and hownet . there are 423,224 distinct entries in all the external lexicons . the mt lexicon consistency of crf-lex in table 4 shows that the mt training lexicon size has been reduced by 29.5 % and the mt test data oov rate is reduced by 34.1 % . named entities are an important source for oov words , and in particular are ones which it is bad to break into pieces ( particularly for foreign names ) . therefore , we use the proper noun ( nr ) part-ofspeech tag information from ctb to extend the label sets of our crf model from 2 to 4 ( { beginning of a word , continuation of a word } x { nr , not nr } ) . this is similar to the β€œ all-at-once , character-based ” pos tagging in ( ng and low , 2004 ) , except that we are only tagging proper nouns . we call the 4label extension crf-lex-nr . the segmentation and mt performance of crf-lex-nr is listed in table 7 . with the 4-label extension , the oov recall rate improved by 3.29 % ; while the iv recall rate stays the same . similar to ( ng and low , 2004 ) , we found the overall f measure only goes up a tiny bit , but we do find a significant oov recall rate improvement . on the mt performance , crf-lex-nr has a 0.32 bleu gain on the test set mt05 . in addition to the bleu improvement , crf-lex-nr also provides extra information about proper nouns , which can be combined with postprocessing named entity translation modules to further improve mt performance . in this paper , we investigated what segmentation properties can improve machine translation performance . first , we found that neither character-based nor a standard word segmentation standard are optimal for mt , and show that an intermediate granularity is much more effective . using an already competitive crf segmentation model , we directly optimize segmentation granularity for translation quality , and obtain an improvement of 0.73 bleu point on mt05 over our lexicon-based segmentation baseline . second , we augment our crf model with lexicon and proper noun features in order to improve segmentation consistency , which provide a 0.32 bleu point improvement . the authors would like to thank menqgiu wang and huihsin tseng for useful discussions . this paper is based on work funded in part by the defense advanced research projects agency through ibm .
optimizing chinese word segmentation for machine translation performance previous work has shown that chinese word segmentation is useful for machine translation to english , yet the way different segmentation strategies affect mt is still poorly understood . in this paper , we demonstrate that optimizing segmentation for an existing segmentation standard does not always yield better mt performance . we find that other factors such as segmentation consistency and granularity of chinese β€œ words ” can be more important for machine translation . based on these findings , we implement methods inside a conditional random field segmenter that directly optimize segmentation granularity with respect to the mt task , providing an improvement of 0.73 bleu . we also show that improving segmentation consistency using external lexicon and proper noun features yields a 0.32 bleu increase . we develop the crf-based stanford chinese segmenter that is trained on the segmentation of the chinese treebank for consistency . we enhance a crf s segmentation model in mt tasks by tuning the word granularity and improving the segmentation consistence .
the viability of web-derived polarity lexicons we examine the viability of building large polarity lexicons semi-automatically from the web . we begin by describing a graph propagation framework inspired by previous work on constructing polarity lexicons from lexical graphs ( kim and hovy , 2004 ; hu and liu , 2004 ; esuli and sabastiani , 2009 ; blair- goldensohn et al. , 2008 ; rao and ravichandran , 2009 ) . we then apply this technique to build an english lexicon that is significantly larger than those previously studied . crucially , this web-derived lexicon does not require wordnet , part-of-speech taggers , or other language-dependent resources typical of sentiment analysis systems . as a result , the lexicon is not limited to specific word classes – e.g. , adjectives that occur in wordnet – and in fact contains slang , misspellings , multiword expressions , etc . we evaluate a lexicon derived from english documents , both qualitatively and quantitatively , and show that it provides superior performance to previously studied lexicons , including one derived from polarity lexicons are large lists of phrases that encode the polarity of each phrase within it – either positive or negative – often with some score representing the magnitude of the polarity ( hatzivassiloglou and mckeown , 1997 ; wiebe , 2000 ; turney , 2002 ) . though classifiers built with machine learning algorithms have become commonplace in the sentiment analysis literature , e.g. , pang et al . ( 2002 ) , the core of many academic and commercial sentiment analysis systems remains the polarity lexicon , which can be constructed manually ( das and chen , 2007 ) , through heuristics ( kim and hovy , 2004 ; esuli and sabastiani , 2009 ) or using machine learning ( turney , 2002 ; rao and ravichandran , 2009 ) . often lexicons are combined with machine learning for improved results ( wilson et al. , 2005 ) . the pervasiveness and sustained use of lexicons can be ascribed to a number of reasons , including their interpretability in large-scale systems as well as the granularity of their analysis . in this work we investigate the viability of polarity lexicons that are derived solely from unlabeled web documents . we propose a method based on graph propagation algorithms inspired by previous work on constructing polarity lexicons from lexical graphs ( kim and hovy , 2004 ; hu and liu , 2004 ; esuli and sabastiani , 2009 ; blair-goldensohn et al. , 2008 ; rao and ravichandran , 2009 ) . whereas past efforts have used linguistic resources – e.g. , wordnet – to construct the lexical graph over which propagation runs , our lexicons are constructed using a graph built from co-occurrence statistics from the entire web . thus , the method we investigate can be seen as a combination of methods for propagating sentiment across lexical graphs and methods for building sentiment lexicons based on distributional characteristics of phrases in raw data ( turney , 2002 ) . the advantage of breaking the dependence on wordnet ( or related resources like thesauri ( mohammad et al. , 2009 ) ) is that it allows the lexicons to include non-standard entries , most notably spelling mistakes and variations , slang , and multiword expressions . the primary goal of our study is to understand the characteristics and practical usefulness of such a lexicon . towards this end , we provide both a qualitative and quantitative analysis for a web-derived english lexicon relative to two previously published lexicons – the lexicon used in wilson et al . 
( 2005 ) and the lexicon used in blair-goldensohn et al . ( 2008 ) . our experiments show that a web-derived lexicon is not only significantly larger , but also has improved accuracy on a sentence polarity classification task , which is an important problem in many sentiment analysis applications , including sentiment aggregation and summarization ( hu and liu , 2004 ; carenini et al. , 2006 ; lerman et al. , 2009 ) . these results hold true both when the lexicons are used in conjunction with string matching to classify sentences , and when they are included within a contextual classifier framework ( wilson et al. , 2005 ) . extracting polarity lexicons from the web has been investigated previously by kaji and kitsuregawa ( 2007 ) , who study the problem exclusively for japanese . in that work a set of positive/negative sentences is first extracted from the web using cues from a syntactic parser as well as the document structure . adjective phrases are then extracted from these sentences based on different statistics of their occurrence in the positive or negative set . our work , on the other hand , does not rely on syntactic parsers or restrict the set of candidate lexicon entries to specific syntactic classes , i.e. , adjective phrases . as a result , the lexicon built in our study is on a different scale than that examined in kaji and kitsuregawa ( 2007 ) . though this hypothesis is not tested here , it also makes our techniques more amenable to adaptation for other languages . in this section we describe a method to construct polarity lexicons using graph propagation over a phrase similarity graph constructed from the web . we construct our lexicon using graph propagation techniques , which have previously been investigated in the construction of polarity lexicons ( kim and hovy , 2004 ; hu and liu , 2004 ; esuli and sabastiani , 2009 ; blair-goldensohn et al. , 2008 ; rao and ravichandran , 2009 ) . we assume as input an undirected edge-weighted graph g = ( v , e ) , where wij ∈ [ 0 , 1 ] is the weight of edge ( vi , vj ) ∈ e . the node set v is the set of candidate phrases for inclusion in a sentiment lexicon . in practice , g should encode semantic similarities between two nodes , e.g. , for sentiment analysis one would hope that wij > wik if vi = good , vj = great and vk = bad . we also assume as input two sets of seed phrases , denoted p for the positive seed set and n for the negative seed set . the common property among all graph propagation algorithms is that they attempt to propagate information from the seed sets to the rest of the graph through its edges . this can be done using machine learning , graph algorithms or more heuristic means . the specific algorithm used in this study is given in figure 1 , which is distinct from common graph propagation algorithms , e.g. , label propagation ( see section 2.3 ) . the output is a polarity vector pol ∈ r^|v| such that poli is the polarity score for the ith candidate phrase ( or the ith node in g ) . in particular , we desire pol to have the following semantics : poli > 0 if the ith phrase has positive polarity , poli < 0 if the ith phrase has negative polarity , and poli = 0 if the ith phrase has no sentiment . intuitively , the algorithm works by computing both a positive and a negative polarity magnitude for each node in the graph , call them pol+i and pol-i . these values are equal to the sum over the max weighted path from every seed word ( either positive or negative ) to node vi .
phrases that are connected to multiple positive seed words through short yet highly weighted paths will receive high positive values . the final polarity of a phrase is then set to poli = pol+i - q pol-i , where q is a constant meant to account for the difference in overall mass of positive and negative flow in the graph . thus , after the algorithm is run , if a phrase has a higher positive than negative polarity score , then its final polarity will be positive , and negative otherwise . there are some implementation details worth pointing out . first , the algorithm in figure 1 is written in an iterative framework , where on each iteration , paths of increasing lengths are considered . the input variable t controls the max path length considered by the algorithm . this can be set to a small value in practice , since the multiplicative path weights result in long paths rarely contributing to polarity scores . second , the parameter γ is a threshold that defines the minimum polarity magnitude a phrase must have to be included in the lexicon . ( the initialization in figure 1 is : poli , pol+i , pol-i = 0 for all i ; then pol+i = 1.0 for all vi ∈ p and pol-i = 1.0 for all vi ∈ n . ) both t and γ were tuned on held-out data . to construct the final lexicon , the remaining nodes , those with polarity scores above γ , are extracted and assigned their corresponding polarity . graph propagation algorithms rely on the existence of graphs that encode meaningful relationships between candidate nodes . past studies on building polarity lexicons have used linguistic resources like wordnet to define the graph through synonym and antonym relations ( kim and hovy , 2004 ; esuli and sabastiani , 2009 ; blair-goldensohn et al. , 2008 ; rao and ravichandran , 2009 ) . the goal of this study is to examine the size and quality of polarity lexicons when the graph is induced automatically from documents on the web . constructing a graph from web-computed lexical co-occurrence statistics is a difficult challenge in and of itself , and the research and implementation hurdles that arise are beyond the scope of this work ( alfonseca et al. , 2009 ; pantel et al. , 2009 ) . for this study , we used an english graph where the node set v was based on all n-grams up to length 10 extracted from 4 billion web pages . this list was filtered to 20 million candidate phrases using a number of heuristics including frequency and mutual information of word boundaries . a context vector for each candidate phrase was then constructed based on a window of size six aggregated over all mentions of the phrase in the 4 billion documents . the edge set e was constructed by first computing , for each potential edge ( vi , vj ) , the cosine similarity value between context vectors . all edges ( vi , vj ) were then discarded if they were not one of the 25 highest weighted edges adjacent to either node vi or vj . this serves both to reduce the size of the graph and to eliminate many spurious edges for frequently occurring phrases , while still keeping the graph relatively connected . the weight of the remaining edges was set to the corresponding cosine similarity value . since this graph encodes co-occurrences over a large but local context window , it can be noisy for our purposes . in particular , we might see a number of edges between positive and negative sentiment words as well as between sentiment words and non-sentiment words , e.g. , sentiment adjectives and all other adjectives that are distributionally similar .
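a compact python sketch of the best-path propagation just described : for every seed , the highest-weighted ( multiplicative ) path of length at most t to each node is found , these per-seed contributions are summed , and the positive and negative masses are combined and thresholded at γ . the adjacency-list format and the particular choice of the scaling constant ( the ratio of total positive to total negative mass , standing in for the constant q above ) are assumptions .

```python
def best_path_propagation(graph, pos_seeds, neg_seeds, T=5, gamma=0.1):
    """graph: dict node -> list of (neighbor, weight) with weights in [0, 1].
    Returns dict node -> final polarity score (0.0 means no sentiment)."""

    def best_paths_from(seed):
        # best[v] = weight of the single highest-weighted path from seed to v,
        # over paths of length at most T (edge weights multiply along a path).
        best = {seed: 1.0}
        for _ in range(T):
            updated = dict(best)
            for u, w_u in best.items():
                for v, w_uv in graph.get(u, []):
                    if w_u * w_uv > updated.get(v, 0.0):
                        updated[v] = w_u * w_uv
            best = updated
        return best

    pol_pos, pol_neg = {}, {}
    for seed in pos_seeds:                 # pol+ = sum over seeds of best path
        for v, w in best_paths_from(seed).items():
            pol_pos[v] = pol_pos.get(v, 0.0) + w
    for seed in neg_seeds:                 # pol- defined symmetrically
        for v, w in best_paths_from(seed).items():
            pol_neg[v] = pol_neg.get(v, 0.0) + w

    # Scaling constant: one natural choice is the ratio of the overall masses.
    q = sum(pol_pos.values()) / max(sum(pol_neg.values()), 1e-9)
    pol = {}
    for v in set(pol_pos) | set(pol_neg):
        score = pol_pos.get(v, 0.0) - q * pol_neg.get(v, 0.0)
        pol[v] = score if abs(score) > gamma else 0.0   # gamma = inclusion threshold
    return pol
```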
larger windows theoretically alleviate this problem as they encode semantic as opposed to syntactic similarities . we note , however , that the graph propagation algorithm described above calculates the sentiment of each phrase as the aggregate of all the best paths to seed words . thus , even if some local edges are erroneous in the graph , one hopes that , globally , positive phrases will be influenced more by paths from positive seed words than by paths from negative seed words . section 3 , and indeed this paper , aims to measure whether this is true or not . previous studies on constructing polarity lexicons from lexical graphs , e.g. , rao and ravichandran ( 2009 ) , have used the label propagation algorithm , which takes the form in figure 2 ( zhu and ghahramani , 2002 ) . label propagation is an iterative algorithm where each node takes on the weighted average of its neighbours' values from the previous iteration . the result is that nodes with many paths to seeds get high polarities due to the influence from their neighbours . the label propagation algorithm is known to have many desirable properties , including convergence , a well-defined objective function ( it minimizes the squared error between values of adjacent nodes ) , and an equivalence to computing random walks through graphs . ( figure 2 takes as input g = ( v , e ) with wij ∈ [ 0 , 1 ] , p , and n , outputs pol ∈ r^|v| , and initializes poli = 1.0 for all vi ∈ p , poli = -1.0 for all vi ∈ n , and poli = 0.0 for all other vi . ) the primary difference between standard label propagation and the graph propagation algorithm given in section 2.1 is that a node with multiple paths to a seed will be influenced by all these paths in the label propagation algorithm , whereas only a single path from a seed , namely the path with highest weight , will influence the polarity of a node in our proposed propagation algorithm . the intuition behind label propagation seems justified : if a node has multiple paths to a seed , it should be reflected in a higher score . this is certainly true when the graph is of high quality and all paths are trustworthy . however , in a graph constructed from web co-occurrence statistics , this is rarely the case . our graph consisted of many dense subgraphs , each representing some semantic entity class , such as actors , authors , tech companies , etc . problems arose when polarity flowed into these dense subgraphs with the label propagation algorithm . ultimately , this flow would amplify since the dense subgraph provided exponentially many paths from each node to the source of the flow , which caused a reinforcement effect . as a result , the lexicon would consist of large groups of actor names , companies , etc . this also led to convergence issues , since the polarity is divided proportionally to the size of the dense subgraph . additionally , negative phrases in the graph appeared to be in more densely connected regions , which resulted in the final lexicons being highly skewed towards negative entries due to the influence of multiple paths to seed words . for best path propagation , these problems were less acute , as each node in the dense subgraph would only get the polarity a single time from each seed , and this contribution is decayed by the fact that edge weights are smaller than 1 . furthermore , the fact that edge weights are less than 1 results in most long paths having weights near zero , which in turn results in fast convergence .
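for contrast , a bare-bones version of the label propagation update described above , with seed values clamped at +1 / -1 and every other node repeatedly set to the weighted average of its neighbours' values ; purely illustrative .

```python
def label_propagation(graph, pos_seeds, neg_seeds, iterations=50):
    """graph: dict node -> list of (neighbor, weight); seed values stay clamped."""
    pol = {v: 0.0 for v in graph}
    pol.update({s: 1.0 for s in pos_seeds})
    pol.update({s: -1.0 for s in neg_seeds})
    clamped = set(pos_seeds) | set(neg_seeds)

    for _ in range(iterations):
        new_pol = dict(pol)
        for v, edges in graph.items():
            if v in clamped or not edges:
                continue
            total_weight = sum(w for _, w in edges)
            # Weighted average of the neighbours' values from the previous pass.
            new_pol[v] = sum(w * pol.get(u, 0.0) for u, w in edges) / total_weight
        pol = new_pol
    return pol
```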
we ran the best path graph propagation algorithm over a graph constructed from the web using manually constructed positive and negative seed sets of 187 and 192 words in size , respectively . these words were generated by a set of five humans and many are morphological variants of the same root , e.g. , excel/excels/excelled . the algorithm produced a lexicon that contained 178,104 entries . depending on the threshold -y ( see figure 1 ) , this lexicon could be larger or smaller . as stated earlier , our selection of -y and all hyperparameters was based on manual inspection of the resulting lexicons and performance on held-out data . in the rest of this section we investigate the properties of this lexicon to understand both its general characteristics as well as its possible utility in sentiment applications . to this end we compare three different lexicons : table 1 breaks down the lexicon by the number of positive and negative entries of each lexicon , which clearly shows that the lexicon derived from the web is more than an order of magnitude larger than previously constructed lexicons.2 this in and of itself is not much of an achievement if the additional phrases are of poor quality . however , in section 3.2 we present an empirical evaluation that suggests that these terms provide both additional and useful information . table 1 also shows the recall of the each lexicon relative to the other . whereas the wilson et al . ( 2005 ) and wordnet lexicon have a recall of only 3 % relative to the web lexicon , the web lexicon has a recall of 48 % and 70 % relative to the two other lexicons , indicating that it contains a significant amount of information from the other lexicons . however , this overlap is still small , suggesting that a combination of all the lexicons could provide the best performance . in section 3.2 we investigate this empirically through a meta classification system . table 2 shows the distribution of phrases in the web-derived lexicon relative to the number of tokens in each phrase . here a token is simply defined by whitespace and punctuation , with punctuation counting as a token , e.g. , β€œ half-baked ” is counted as 3 tokens . for the most part , we see what one might expect , as the number of tokens increases , the number of corresponding phrases in the lexicon also decreases . longer phrases are less frequent and thus will have both fewer and lower weighted edges to adjacent nodes in the graph . there is a single phrase of length 9 , which is β€œ motion to dismiss for failure to state a claim ” . in fact , the lexicon contains quite a number of legal and medical phrases . this should not be surprising , since in a graph induced from the web , a phrase like β€œ cancer ” ( or any disease ) should be distributionally similar to phrases like β€œ illness ” , β€œ sick ” , and β€œ death ” , which themselves will be similar to standard sentiment phrases like β€œ bad ” and β€œ terrible ” . these terms are predominantly negative in the lexicon representing the broad notion that legal and medical events are undesirable . perhaps the most interesting characteristic of the lexicon is that the most frequent phrase length is 2 and not 1 . the primary reason for this is an abundance of adjective phrases consisting of an adverb and an adjective , such as β€œ more brittle ” and β€œ less brittle ” . almost every adjective of length 1 is frequently combined in such a way on the web , so it not surprising that we see many of these phrases in the lexicon . 
ideally we would see an ordering on such phrases , e.g. , " more brittle " would have a larger negative polarity than " brittle " , which in turn would have a larger negative polarity than " less brittle " . however , this is rarely the case , and usually the bare adjective has the highest polarity magnitude . again , this is easily explained : these phrases are necessarily more common and will thus have more edges with larger weights in the graph , and thus a greater chance of accumulating a high sentiment score . the prominence of such phrases suggests that a more principled treatment of them should be investigated in the future . finally , table 3 presents a selection of phrases from both the positive and negative lexicons categorized into revealing verticals . ( the rows of table 3 pair typical positive adjectives such as plucky , ravishing , spunky , enchanting , precious , charming , and stupendous ; positive multiword expressions such as just what the doctor ordered , out of this world , top of the line , melt in your mouth , snug as a bug , out of the box , and more good than bad ; positive spelling variants such as cooool , coooool , koool , kewl , cozy , cosy , and sikk ; typical negative adjectives such as sucky , subpar , horrendous , miserable , lousy , abysmal , and wretched ; negative multiword expressions such as flash in the pan , bumps in the road , foaming at the mouth , dime a dozen , pie-in-the-sky , sick to my stomach , and pain in my ass ; and negative vulgarity such as shitty , half assed , jackass , piece of shit , son of a bitch , sonofabitch , and sonuvabitch . ) for both positive and negative phrases we present typical examples of phrases , usually adjectives , that one would expect to be in a sentiment lexicon . these are phrases not included in the seed sets . we also present multiword phrases for both positive and negative cases , which displays concretely the advantage of building lexicons from the web as opposed to using restricted linguistic resources such as wordnet . finally , we show two special cases . the first is spelling variations ( and mistakes ) for positive phrases , which were far more prominent than for negative phrases . many of these correspond to social media text where one expresses an increased level of sentiment by repeating characters . the second is vulgarity in negative phrases , which was far more prominent than for positive phrases . some of these are clearly appropriate , e.g. , " shitty " , but some are clearly insults and outbursts that are most likely included due to their co-occurrence with angry texts . there were also a number of derogatory terms and racial slurs in the lexicon , again most of which received negative sentiment due to their typical disparaging usage . to determine the practical usefulness of a polarity lexicon derived from the web , we measured the performance of the lexicon on a sentence classification/ranking task . the input is a set of sentences and the output is a classification of the sentences as being either positive , negative or neutral in sentiment . additionally , the system outputs two rankings , the first a ranking of the sentences by positive polarity and the second a ranking of the sentences by negative polarity . classifying sentences by their sentiment is a subtask of sentiment aggregation systems ( hu and liu , 2004 ; gamon et al. , 2005 ) . ranking sentences by their polarity is a critical sub-task in extractive sentiment summarization ( carenini et al. , 2006 ; lerman et al. , 2009 ) . to classify sentences as being positive , negative or neutral , we used an augmented vote-flip algorithm ( choi and cardie , 2009 ) , which is given in figure 3 . the intuition behind this algorithm is simple : the numbers of matched positive and negative phrases from the lexicon are counted , and whichever has the most votes wins .
the algorithm flips the decision if the number of negations is odd . though this algorithm appears crude , it benefits from not relying on threshold values for neutral classification , which is difficult due to the fact that the polarity scores in the three lexicons are not on the same scale . to rank sentences we defined the purity of a sentence x as the normalized sum of the sentiment scores for each phrase x in the sentence : this is a normalized score in the range [ βˆ’1 , 1 ] . intuitively , sentences with many terms of the same polarity will have purity scores at the extreme points of the range . before calculating purity , a simple negation heuristic was implemented that reversed the sentiment scores of terms that were within the scope of negations . the term 6 helps to favor sentences with multiple phrase matches . purity is a common metric used for ranking sentences for inclusion in sentiment summaries ( lerman et al. , 2009 ) . purity and negative purity were used to rank sentences as being positive and negative sentiment , respectively . the data used in our initial english-only experiments were a set of 554 consumer reviews described in ( mcdonald et al. , 2007 ) . each review was sentence split and annotated by a human as being positive , negative or neutral in sentiment . this resulted in 3,916 sentences , with 1,525 , 1,542 and 849 positive , negative and neutral sentences , respectively . the first six columns of table 4 shows : 1 ) the positive/negative precision-recall of each lexicon-based system where sentence classes were determined using the vote-flip algorithm , and 2 ) the average precision for each lexicon-based system where purity ( or negative purity ) was used to rank sentences . both the wilson et al . and wordnet lp lexicons perform at a similar level , with the former slightly better , especially in terms of precision . the web-derived lexicon , web gp , outperforms the other two lexicons across the board , in particular when looking at average precision , where the gains are near 10 % absolute . if we plot the precision-recall graphs using purity to classify sentences – as opposed to the voteflip algorithm , which only provides an unweighted classification – we can see that at almost all recall levels the web-derived lexicon has superior precision to the other lexicons ( figure 4 ) . thus , even though the web-derived lexicon is constructed from a lexical graph that contains noise , the graph propagation algorithms appear to be fairly robust to this noise and are capable of producing large and accurate polarity lexicons . the second six columns of table 4 shows the performance of each lexicon as the core of a contextual classifier ( wilson et al. , 2005 ) . a contextual classifier is a machine learned classifier that predicts the polarity of a sentence using features of that sentence and its context . for our experiments , this was a maximum entropy classifier trained and evaluated using 10-fold cross-validation on the evaluation data . the features included in the classifier were the purity score , the number of positive and negative lexicon matches , and the number of negations in the sentence , as well as concatenations of these features within the sentence and with the same features derived from the sentences in a window of size 1 . for each sentence , the contextual classifier predicted either a positive , negative or neutral classification based on the label with highest probability . 
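a rough python sketch of the vote-flip classification and the purity ranking score described earlier in this section ; the placement of the smoothing constant delta and the way negation is folded in are simplified assumptions .

```python
def vote_flip(matches, lexicon, num_negations):
    """Classify a sentence from its lexicon matches, flipping on odd negations."""
    pos = sum(1 for m in matches if lexicon.get(m, 0.0) > 0)
    neg = sum(1 for m in matches if lexicon.get(m, 0.0) < 0)
    if pos == neg:
        return "neutral"
    label = "positive" if pos > neg else "negative"
    if num_negations % 2 == 1:
        label = "negative" if label == "positive" else "positive"
    return label

def purity(matches, lexicon, delta=1.0):
    """Normalized sum of matched sentiment scores, lying in [-1, 1].  Scores of
    matches inside a negation scope are assumed to have been flipped already."""
    scores = [lexicon[m] for m in matches if m in lexicon]
    if not scores:
        return 0.0
    return sum(scores) / (delta + sum(abs(s) for s in scores))
```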
additionally , all sentences were placed in the positive and negative sentence rankings by the probability the classifier assigned to the positive and negative classes , respectively . mirroring the results of wilson et al . ( 2005 ) , we see that contextual classifiers improve results substantially over lexical matching . more interestingly , we see that the a contextual classifier over the web-derived lexicons maintains the performance edge over the other lexicons , though the gap is smaller . figure 5 plots the precision-recall curves for the positive and negative sentence rankings , again showing that at almost every level of recall , the web-derived lexicon has higher precision . for a final english experiment we built a metaclassification system that is identical to the contextual classifiers , except it is trained using features derived from all lexicons . results are shown in the last row of table 4 and precision-recall curves are shown in figure 5 . not surprisingly , this system has the best performance in terms of average precision as it has access to the largest amount of information , though its performance is only slightly better than the contextual classifier for the web-derived lexicon . in this paper we examined the viability of sentiment lexicons learned semi-automatically from the web , as opposed to those that rely on manual annotation and/or resources such as wordnet . our qualitative experiments indicate that the web derived lexicon can include a wide range of phrases that have not been available to previous systems , most notably spelling variations , slang , vulgarity , and multiword expressions . quantitatively , we observed that the web derived lexicon had superior performance to previously published lexicons for english classification . ultimately , a meta classifier that incorporates features from all lexicons provides the best performance . in the future we plan to investigate the construction of web-derived lexicons for languages other than english , which is an active area of research ( mihalcea et al. , 2007 ; jijkoun and hofmann , 2009 ; rao and ravichandran , 2009 ) . the advantage of the web-derived lexicons studied here is that they do not rely on language specific resources besides unlabeled data and seed lists . a primary question is whether such lexicons improve performance over a translate-to-english strategy ( banea et al. , 2008 ) . acknowledgements : the authors thank andrew hogue , raj krishnan and deepak ravichandran for insightful discussions about this work .
the viability of web-derived polarity lexicons we examine the viability of building large polarity lexicons semi-automatically from the web . we begin by describing a graph propagation framework inspired by previous work on constructing polarity lexicons from lexical graphs ( kim and hovy , 2004 ; hu and liu , 2004 ; esuli and sebastiani , 2009 ; blair-goldensohn et al. , 2008 ; rao and ravichandran , 2009 ) . we then apply this technique to build an english lexicon that is significantly larger than those previously studied . crucially , this web-derived lexicon does not require wordnet , part-of-speech taggers , or other language-dependent resources typical of sentiment analysis systems . as a result , the lexicon is not limited to specific word classes - e.g. , adjectives that occur in wordnet - and in fact contains slang , misspellings , multiword expressions , etc . we evaluate a lexicon derived from english documents , both qualitatively and quantitatively , and show that it provides superior performance to previously studied lexicons , including one derived from wordnet . we construct a graph where the nodes are 20 million candidate words or phrases , selected using a set of heuristics including frequency and mutual information of word boundaries .
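the passage above describes propagating sentiment from seed words over a very large lexical graph . the sketch below is a generic label-propagation routine meant only to illustrate the idea ; the update rule , damping factor , and iteration count are assumptions , and the paper's own propagation algorithm differs in its details .

```python
# Generic label propagation over a term graph: seeds are clamped to +1/-1
# and scores spread to neighbors weighted by edge strength. Illustrative
# stand-in, not the paper's algorithm.
def propagate(graph, pos_seeds, neg_seeds, iters=10, damping=0.9):
    """graph: dict node -> list of (neighbor, edge_weight in [0, 1])."""
    score = {n: 0.0 for n in graph}
    score.update({s: 1.0 for s in pos_seeds if s in score})
    score.update({s: -1.0 for s in neg_seeds if s in score})
    seeds = set(pos_seeds) | set(neg_seeds)
    for _ in range(iters):
        new = {}
        for node, neighbors in graph.items():
            if node in seeds:          # seed polarities stay clamped
                new[node] = score[node]
                continue
            mass = sum(w for _, w in neighbors) or 1.0
            new[node] = damping * sum(score.get(m, 0.0) * w
                                      for m, w in neighbors) / mass
        score = new
    return score  # signed polarity score per candidate word or phrase

# toy usage on a four-node graph
g = {"good": [("great", 0.8), ("so-so", 0.2)],
     "great": [("good", 0.8)],
     "so-so": [("good", 0.2), ("awful", 0.3)],
     "awful": [("so-so", 0.3)]}
polarity = propagate(g, pos_seeds={"good"}, neg_seeds={"awful"})
```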
combining lexical syntactic and semantic features with maximum entropy models for information extraction extracting semantic relationships between entities is challenging because of a paucity of annotated data and the errors induced by entity detection modules . we employ maximum entropy models to combine diverse lexical , syntactic and semantic features derived from the text . our system obtained competitive results in the automatic content extraction ( ace ) evaluation . here we present our general approach and describe our ace results . extraction of semantic relationships between entities can be very useful for applications such as biography extraction and question answering , e.g . to answer queries such as β€œ where is the taj mahal ? ” . several prior approaches to relation extraction have focused on using syntactic parse trees . for the template relations task of muc-7 , bbn researchers ( miller et al. , 2000 ) augmented syntactic parse trees with semantic information corresponding to entities and relations and built generative models for the augmented trees . more recently , ( zelenko et al. , 2003 ) have proposed extracting relations by computing kernel functions between parse trees and ( culotta and sorensen , 2004 ) have extended this work to estimate kernel functions between augmented dependency trees . we build maximum entropy models for extracting relations that combine diverse lexical , syntactic and semantic features . our results indicate that using a variety of information sources can result in improved recall and overall f measure . our approach can easily scale to include more features from a multitude of sources–e.g . wordnet , gazatteers , output of other semantic taggers etc.–that can be brought to bear on this task . in this paper , we present our general approach , describe the features we currently use and show the results of our participation in the ace evaluation . automatic content extraction ( ace , 2004 ) is an evaluation conducted by nist to measure entity detection and tracking ( edt ) and relation detection and characterization ( rdc ) . the edt task entails the detection of mentions of entities and chaining them together by identifying their coreference . in ace vocabulary , entities are objects , mentions are references to them , and relations are explicitly or implicitly stated relationships among entities . entities can be of five types : persons , organizations , locations , facilities , and geo-political entities ( geographically defined regions that define a political boundary , e.g . countries , cities , etc . ) . mentions have levels : they can be names , nominal expressions or pronouns . the rdc task detects implicit and explicit relations ' between entities identified by the edt task . here is an example : the american medical association voted yesterday to install the heir apparent as its president-elect , rejecting a strong , upstart challenge by a district doctor who argued that the nation ’ s largest physicians ’ group needs stronger ethics and new leadership . in electing thomas r. reardon , an oregon general practitioner who had been the chairman of its board , ... in this fragment , all the underlined phrases are mentions referring to the american medical association , or to thomas r. reardon or the board ( an organization ) of the american medical association . moreover , there is an explicit management relation between chairman and board , which are references to thomas r. 
reardon and the board of the american medical association respectively . relation extraction is hard , since successful extraction implies correctly detecting both the argument mentions , correctly chaining these mentions to their rein the ace 2003 evaluation . spective entities , and correctly determining the type of relation that holds between them . this paper focuses on the relation extraction component of our ace system . the reader is referred to ( florian et al. , 2004 ; ittycheriah et al. , 2003 ; luo et al. , 2004 ) for more details of our mention detection and mention chaining modules . in the next section , we describe our extraction system . we present results in section 3 , and we conclude after making some general observations in section 4 . we built maximum entropy models for predicting the type of relation ( if any ) between every pair of mentions within each sentence . we only model explicit relations , because of poor inter-annotator agreement in the annotation of implicit relations . table 1 lists the types and subtypes of relations for the ace rdc task , along with their frequency of occurence in the ace training data2 . note that only 6 of these 24 relation types are symmetric : β€œ relative-location ” , β€œ associate ” , β€œ other-relative ” , β€œ other-professional ” , β€œ sibling ” , and β€œ spouse ” . we only model the relation subtypes , after making them unique by concatenating the type where appropriate ( e.g . β€œ other ” became β€œ other-part ” and β€œ other-role ” ) . we explicitly model the argument order of mentions . thus , when comparing mentions and , we distinguish between the case where -citizen-of- and -citizen-of- . we thus model the extraction as a classification problem with 49 classes , two for each relation subtype and a β€œ none ” class for the case where the two mentions are not related . for each pair of mentions , we compute several feature streams shown below . all the syntactic features are derived from the syntactic parse tree and the dependency tree that we compute using a statistical parser trained on the penntree bank using the maximum entropy framework ( ratnaparkhi , 1999 ) . the feature streams are : words the words of both the mentions and all the words in between . entity type the entity type ( one of person , organization , location , facility , geo-political entity or gpe ) of both the mentions . mention level the mention level ( one of name , nominal , pronoun ) of both the mentions . overlap the number of words ( if any ) separating the two mentions , the number of other mentions in between , flags indicating whether the two mentions are in the same noun phrase , verb phrase or prepositional phrase . dependency the words and part-of-speech and chunk labels of the words on which the mentions are dependent in the dependency tree derived from the syntactic parse tree . parse tree the path of non-terminals ( removing duplicates ) connecting the two mentions in the parse tree , and the path annotated with head words . here is an example . for the sentence fragment , been the chairman of its board ... the corresponding syntactic parse tree is shown in figure 1 and the dependency tree is shown in figure 2 . for the pair of mentions chairman and board , the feature streams are shown below . words , , , . overlap one-mention-in-between ( the word β€œ its ” ) , two-words-apart , in-same-noun-phrase . 
dependency ( word on which is depedent ) , ( pos of word on which is dependent ) , ( chunk label of word on which is dependent ) , parse tree person-np-pp-organization , person-np-pp : of-organization ( both derived from the path shown in bold in figure 1 ) . we trained maximum entropy models using features derived from the feature streams described above . we divided the ace training data provided by ldc into separate training and development sets . the training set contained around 300k words , and 9752 instances of relations and the development set contained around 46k words , and 1679 instances of relations . we report results in two ways . to isolate the perfomance of relation extraction , we measure the performance of relation extraction models on β€œ true ” mentions with β€œ true ” chaining ( i.e . as annotated by ldc annotators ) . we also measured performance of models run on the deficient output of mention detection and mention chaining modules . we report both the f-measure ' and the ace value of relation extraction . the ace value is a nist metric that assigns 0 % value for a system which produces no output and 100 % value for a system that extracts all the relations and produces no false alarms . we count the misses ; the true relations not extracted by the system , and the false alarms ; the spurious relations extracted by the system , and obtain the ace value by subtracting from 1.0 , the normalized weighted cost of the misses and false alarms . the ace value counts each relation only once , even if it was expressed many times in a document in different ways . the reader is referred to the ace web site ( ace , 2004 ) for more details . we built several models to compare the relative utility of the feature streams described in the previous section . table 2 shows the results we obtained when running on β€œ truth ” for the development set and table 3 shows the results we obtained when running on the output of mention detection and mention chaining modules . note that a model trained with only words as features obtains a very high precision and a very low recall . for example , for the mention pair his and wife with no words in between , the lexical features together with the fact that there are no words in between is sufficient ( though not necessary ) to extract the relationship between the two entities . the addition of entity types , mention levels and especially , the word proximity features ( β€œ overlap ” ) boosts the recall at the expense of the very sets with true ( t ) and system output ( s ) mentions and entities . high precision . adding the parse tree and dependency tree based features gives us our best result by exploiting the consistent syntactic patterns exhibited between mentions for some relations . note that the trends of contributions from different feature streams is consistent for the β€œ truth ” and system output runs . as expected , the numbers are significantly lower for the system output runs due to errors made by the mention detection and mention chaining modules . we ran the best model on the official ace feb ’ 2002 and ace sept ’ 2003 evaluation sets . we obtained competitive results shown in table 4 . the rules of the ace evaluation prohibit us from disclosing our final ranking and the results of other participants . we have presented a statistical approach for extracting relations where we combine diverse lexical , syntactic , and semantic features . we obtained competitive results on the ace rdc task . 
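to make the feature streams above concrete , here is a schematic sketch of assembling features for one mention pair . the mention fields , helper inputs , and feature-name templates are hypothetical , not the authors' implementation .

```python
# Schematic feature assembly for a mention pair, following the streams
# listed above: words, entity types, mention levels, overlap, dependency
# heads, and the parse-tree path. All container shapes are assumptions.
from dataclasses import dataclass

@dataclass
class Mention:
    tokens: list        # surface tokens of the mention
    start: int          # token span within the sentence
    end: int
    entity_type: str    # PERSON, ORGANIZATION, LOCATION, FACILITY, GPE
    level: str          # NAME, NOMINAL, PRONOUN
    head_dep: tuple     # (head word, head POS, head chunk) from the dependency tree

def pair_features(m1, m2, sent_tokens, mentions_in_sentence, parse_path):
    left, right = (m1, m2) if m1.start <= m2.start else (m2, m1)
    between = sent_tokens[left.end:right.start]
    feats = {}
    feats.update({f"word={w}": 1 for w in m1.tokens + m2.tokens + between})
    feats[f"etypes={m1.entity_type}-{m2.entity_type}"] = 1
    feats[f"levels={m1.level}-{m2.level}"] = 1
    feats[f"num_words_between={len(between)}"] = 1
    feats[f"mentions_between={sum(1 for m in mentions_in_sentence if left.end <= m.start < right.start)}"] = 1
    feats[f"dep1={m1.head_dep}"] = 1
    feats[f"dep2={m2.head_dep}"] = 1
    feats[f"path={parse_path}"] = 1   # e.g. 'person-np-pp-organization'
    return feats
```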
several previous relation extraction systems have focused almost exclusively on syntactic parse trees . we believe our approach of combining many kinds of evidence can potentially scale better to problems ( like ace ) , where we have a lot of relation types with relatively small amounts of annotated data . our system certainly benefits from features derived from parse trees , but it is not inextricably linked to them . even using very simple lexical features , we obtained high precision extractors that can potentially be used to annotate large amounts of unlabeled data for semi-supervised or unsupervised learning , without having to parse the entire data . we obtained our best results when we combined a variety of features . we thank salim roukos for several invaluable suggestions and the entire ace team at ibm for help with various components , feature suggestions and guidance .
combining lexical syntactic and semantic features with maximum entropy models for information extraction extracting semantic relationships between entities is challenging because of a paucity of annotated data and the errors induced by entity detection modules . we employ maximum entropy models to combine diverse lexical , syntactic and semantic features derived from the text . our system obtained competitive results in the automatic content extraction ( ace ) evaluation . here we present our general approach and describe our ace results . we use two kinds of features : syntactic ones and word based ones , for example , the path of the given pair of nes in the parse tree and the word n-gram between nes . we obtain improvement in results when we combine a variety of features . we achieved the f-measure of 52.8 on the 24 relation subtypes in the ace rdc 2003 corpus .
similarity of semantic relations there are at least two kinds of similarity . relational similarity is correspondence between relations , in contrast with attributional similarity , which is correspondence between attributes . when two words have a high degree of attributional similarity , we call them synonyms . when two pairs of words have a high degree of relational similarity , we say that their relations are analogous . for example , the word pair mason : stone is analogous to the pair carpenter : wood . this article introduces latent relational analysis ( lra ) , a method for measuring relational similarity . lra has potential applications in many areas , including information extraction , word sense disambiguation , and information retrieval . recently the vector space model ( vsm ) of information retrieval has been adapted to measuring relational similarity , achieving a score of 47 % on a collection of 374 college-level multiple-choice word analogy questions . in the vsm approach , the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus . lra extends the vsm approach in three ways : ( 1 ) the patterns are derived automatically from the corpus , ( 2 ) the singular value decomposition ( svd ) is used to smooth the frequency data , and ( 3 ) automatically generated synonyms are used to explore variations of the word pairs . lra achieves 56 % on the 374 analogy questions , statistically equivalent to the average human score of 57 % . on the related problem of classifying semantic relations , lra achieves similar gains over the vsm . there are at least two kinds of similarity . attributional similarity is correspondence between attributes and relational similarity is correspondence between relations ( medin , goldstone , and gentner 1990 ) . when two words have a high degree of attributional similarity , we call them synonyms . when two word pairs have a high degree of relational similarity , we say they are analogous .
verbal analogies are often written in the form a : b : :c : d , meaning a is to b as c is to d ; for example , traffic : street : :water : riverbed . traffic flows over a street ; water flows over a riverbed . a street carries traffic ; a riverbed carries water . there is a high degree of relational similarity between the word pair traffic : street and the word pair water : riverbed . in fact , this analogy is the basis of several mathematical theories of traffic flow ( daganzo 1994 ) . in section 2 , we look more closely at the connections between attributional and relational similarity . in analogies such as mason : stone : :carpenter : wood , it seems that relational similarity can be reduced to attributional similarity , since mason and carpenter are attributionally similar , as are stone and wood . in general , this reduction fails . consider the analogy traffic : street : :water : riverbed . traffic and water are not attributionally similar . street and riverbed are only moderately attributionally similar . many algorithms have been proposed for measuring the attributional similarity between two words ( lesk 1969 ; resnik 1995 ; landauer and dumais 1997 ; jiang and conrath 1997 ; lin 1998b ; turney 2001 ; budanitsky and hirst 2001 ; banerjee and pedersen 2003 ) . measures of attributional similarity have been studied extensively , due to their applications in problems such as recognizing synonyms ( landauer and dumais 1997 ) , information retrieval ( deerwester et al . 1990 ) , determining semantic orientation ( turney 2002 ) , grading student essays ( rehder et al . 1998 ) , measuring textual cohesion ( morris and hirst 1991 ) , and word sense disambiguation ( lesk 1986 ) . on the other hand , since measures of relational similarity are not as well developed as measures of attributional similarity , the potential applications of relational similarity are not as well known . many problems that involve semantic relations would benefit from an algorithm for measuring relational similarity . we discuss related problems in natural language processing , information retrieval , and information extraction in more detail in section 3 . this article builds on the vector space model ( vsm ) of information retrieval . given a query , a search engine produces a ranked list of documents . the documents are ranked in order of decreasing attributional similarity between the query and each document . almost all modern search engines measure attributional similarity using the vsm ( baeza-yates and ribeiro-neto 1999 ) . turney and littman ( 2005 ) adapt the vsm approach to measuring relational similarity . they used a vector of frequencies of patterns in a corpus to represent the relation between a pair of words . section 4 presents the vsm approach to measuring similarity . in section 5 , we present an algorithm for measuring relational similarity , which we call latent relational analysis ( lra ) . the algorithm learns from a large corpus of unlabeled , unstructured text , without supervision . lra extends the vsm approach of turney and littman ( 2005 ) in three ways : ( 1 ) the connecting patterns are derived automatically from the corpus , instead of using a fixed set of patterns . ( 2 ) singular value decomposition ( svd ) is used to smooth the frequency data . ( 3 ) given a word pair such as traffic : street , lra considers transformations of the word pair , generated by replacing one of the words by synonyms , such as traffic : road or traffic : highway . 
section 6 presents our experimental evaluation of lra with a collection of 374 multiple-choice word analogy questions from the sat college entrance exam.1 an example of a typical sat question appears in table 1 . in the educational testing literature , the first pair ( mason : stone ) is called the stem of the analogy . the correct choice is called the solution and the incorrect choices are distractors . we evaluate lra by testing its ability to select the solution and avoid the distractors . the average performance of collegebound senior high school students on verbal sat questions corresponds to an accuracy of about 57 % . lra achieves an accuracy of about 56 % . on these same questions , the vsm attained 47 % . one application for relational similarity is classifying semantic relations in nounmodifier pairs ( turney and littman 2005 ) . in section 7 , we evaluate the performance of lra with a set of 600 noun-modifier pairs from nastase and szpakowicz ( 2003 ) . the problem is to classify a noun-modifier pair , such as β€œ laser printer , ” according to the semantic relation between the head noun ( printer ) and the modifier ( laser ) . the 600 pairs have been manually labeled with 30 classes of semantic relations . for example , β€œ laser printer ” is classified as instrument ; the printer uses the laser as an instrument for printing . we approach the task of classifying semantic relations in noun-modifier pairs as a supervised learning problem . the 600 pairs are divided into training and testing sets and a testing pair is classified according to the label of its single nearest neighbor in the training set . lra is used to measure distance ( i.e. , similarity , nearness ) . lra achieves an accuracy of 39.8 % on the 30-class problem and 58.0 % on the 5-class problem . on the same 600 noun-modifier pairs , the vsm had accuracies of 27.8 % ( 30-class ) and 45.7 % ( 5-class ) ( turney and littman 2005 ) . we discuss the experimental results , limitations of lra , and future work in section 8 and we conclude in section 9 . in this section , we explore connections between attributional and relational similarity . medin , goldstone , and gentner ( 1990 ) distinguish attributes and relations as follows : attributes are predicates taking one argument ( e.g. , x is red , x is large ) , whereas relations are predicates taking two or more arguments ( e.g. , x collides with y , x is larger than y ) . attributes are used to state properties of objects ; relations express relations between objects or propositions . gentner ( 1983 ) notes that what counts as an attribute or a relation can depend on the context . for example , large can be viewed as an attribute of x , large ( x ) , or a relation between x and some standard y , larger than ( x , y ) . the amount of attributional similarity between two words , a and b , depends on the degree of correspondence between the properties of a and b . a measure of attributional similarity is a function that maps two words , a and b , to a real number , sima ( a , b ) e r. the more correspondence there is between the properties of a and b , the greater their attributional similarity . for example , dog and wolf have a relatively high degree of attributional similarity . the amount of relational similarity between two pairs of words , a : b and c : d , depends on the degree of correspondence between the relations between a and b and the relations between c and d. 
a measure of relational similarity is a function that maps two pairs , a : b and c : d , to a real number , simr ( a : b , c : d ) e r. the more correspondence there is between the relations of a : b and c : d , the greater their relational similarity . for example , dog : bark and cat : meow have a relatively high degree of relational similarity . cognitive scientists distinguish words that are semantically associated ( bee–honey ) from words that are semantically similar ( deer–pony ) , although they recognize that some words are both associated and similar ( doctor–nurse ) ( chiarello et al . 1990 ) . both of these are types of attributional similarity , since they are based on correspondence between attributes ( e.g. , bees and honey are both found in hives ; deer and ponies are both mammals ) . budanitsky and hirst ( 2001 ) describe semantic relatedness as follows : recent research on the topic in computational linguistics has emphasized the perspective of semantic relatedness of two lexemes in a lexical resource , or its inverse , semantic distance . it ’ s important to note that semantic relatedness is a more general concept than similarity ; similar entities are usually assumed to be related by virtue of their likeness ( bank–trust company ) , but dissimilar entities may also be semantically related by lexical relationships such as meronymy ( car–wheel ) and antonymy ( hot–cold ) , or just by any kind of functional relationship or frequent association ( pencil–paper , penguin–antarctica ) . as these examples show , semantic relatedness is the same as attributional similarity ( e.g. , hot and cold are both kinds of temperature , pencil and paper are both used for writing ) . here we prefer to use the term attributional similarity because it emphasizes the contrast with relational similarity . the term semantic relatedness may lead to confusion when the term relational similarity is also under discussion . resnik ( 1995 ) describes semantic similarity as follows : semantic similarity represents a special case of semantic relatedness : for example , cars and gasoline would seem to be more closely related than , say , cars and bicycles , but the latter pair are certainly more similar . rada et al . ( 1989 ) suggest that the assessment of similarity in semantic networks can in fact be thought of as involving just taxonomic ( is-a ) links , to the exclusion of other link types ; that view will also be taken here , although admittedly it excludes some potentially useful information . thus semantic similarity is a specific type of attributional similarity . the term semantic similarity is misleading , because it refers to a type of attributional similarity , yet relational similarity is not any less semantic than attributional similarity . to avoid confusion , we will use the terms attributional similarity and relational similarity , following medin , goldstone , and gentner ( 1990 ) . instead of semantic similarity ( resnik 1995 ) or semantically similar ( chiarello et al . 1990 ) , we prefer the term taxonomical similarity , which we take to be a specific type of attributional similarity . we interpret synonymy as a high degree of attributional similarity . analogy is a high degree of relational similarity . 
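the definitions above treat the two similarities as real-valued functions . the sketch below simply fixes those signatures and shows the single-nearest-neighbor use of relational similarity for classifying noun-modifier pairs that is evaluated later ( section 7 ) ; the similarity functions are left as stubs , since in the article they are supplied by the vsm or by lra .

```python
# Type-level sketch of sim_a and sim_r, plus the 1-nearest-neighbor
# classifier that uses relational similarity as its nearness measure.
from typing import Callable, List, Tuple

Pair = Tuple[str, str]
AttributionalSim = Callable[[str, str], float]   # sim_a(a, b) -> R
RelationalSim = Callable[[Pair, Pair], float]    # sim_r(a:b, c:d) -> R

def nearest_neighbor_label(test_pair: Pair,
                           training: List[Tuple[Pair, str]],
                           sim_r: RelationalSim) -> str:
    """Label a noun-modifier pair with the class of the training pair to
    which it is most relationally similar."""
    best_label, best_sim = None, float("-inf")
    for train_pair, label in training:
        s = sim_r(test_pair, train_pair)
        if s > best_sim:
            best_label, best_sim = label, s
    return best_label
```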
algorithms for measuring attributional similarity can be lexicon-based ( lesk 1986 ; budanitsky and hirst 2001 ; banerjee and pedersen 2003 ) , corpus-based ( lesk 1969 ; landauer and dumais 1997 ; lin 1998a ; turney 2001 ) , or a hybrid of the two ( resnik 1995 ; jiang and conrath 1997 ; turney et al . 2003 ) . intuitively , we might expect that lexicon-based algorithms would be better at capturing synonymy than corpusbased algorithms , since lexicons , such as wordnet , explicitly provide synonymy information that is only implicit in a corpus . however , experiments do not support this intuition . several algorithms have been evaluated using 80 multiple-choice synonym questions taken from the test of english as a foreign language ( toefl ) . an example of one of the 80 toefl questions appears in table 2 . table 3 shows the best performance on the toefl questions for each type of attributional similarity algorithm . the results support the claim that lexicon-based algorithms have no advantage over corpus-based algorithms for recognizing synonymy . we may distinguish near analogies ( mason : stone : :carpenter : wood ) from far analogies ( traffic : street : :water : riverbed ) ( gentner 1983 ; medin , goldstone , and gentner 1990 ) . in an analogy a : b : :c : d , where there is a high degree of relational similarity between a : b and c : d , if there is also a high degree of attributional similarity between a and c , and between b and d , then a : b : :c : d is a near analogy ; otherwise , it is a far analogy . it seems possible that sat analogy questions might consist largely of near analogies , in which case they can be solved using attributional similarity measures . we could score each candidate analogy by the average of the attributional similarity , sima , between a and c and between b and d : this kind of approach was used in two of the thirteen modules in turney et al . ( 2003 ) ( see section 3.1 ) . an example of a typical toefl question , from the collection of 80 questions . stem : levied to evaluate this approach , we applied several measures of attributional similarity to our collection of 374 sat questions . the performance of the algorithms was measured by precision , recall , and f , defined as follows : note that recall is the same as percent correct ( for multiple-choice questions , with only zero or one guesses allowed per question , but not in general ) . table 4 shows the experimental results for our set of 374 analogy questions . for example , using the algorithm of hirst and st-onge ( 1998 ) , 120 questions were answered correctly , 224 incorrectly , and 30 questions were skipped . when the algorithm assigned the same similarity to all of the choices for a given question , that question was skipped . the precision was 120/ ( 120 + 224 ) and the recall was 120/ ( 120 + 224 + 30 ) . the first five algorithms in table 4 are implemented in pedersen ’ s wordnetsimilarity package.2 the sixth algorithm ( turney 2001 ) used the waterloo multitext system ( wmts ) , as described in terra and clarke ( 2003 ) . the difference between the lowest performance ( jiang and conrath 1997 ) and random guessing is statistically significant with 95 % confidence , according to the fisher exact test ( agresti 1990 ) . however , the difference between the highest performance ( turney 2001 ) and the vsm approach ( turney and littman 2005 ) is also statistically significant with 95 % confidence . 
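the analogy-scoring rule and the precision , recall , and f definitions used above can be written out explicitly . based on the description ( the average of the two attributional similarities , and the worked example 120 / ( 120 + 224 ) and 120 / ( 120 + 224 + 30 ) ) , they amount to the following ; the exact notation is a reconstruction .

```latex
\[
\mathrm{score}(a{:}b::c{:}d) \;=\; \tfrac{1}{2}\bigl(\mathrm{sim}_a(a,c) + \mathrm{sim}_a(b,d)\bigr)
\]
\[
\mathrm{precision} = \frac{\text{correct}}{\text{correct}+\text{incorrect}}, \qquad
\mathrm{recall} = \frac{\text{correct}}{\text{correct}+\text{incorrect}+\text{skipped}}, \qquad
F = \frac{2\cdot\mathrm{precision}\cdot\mathrm{recall}}{\mathrm{precision}+\mathrm{recall}}
\]
```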
we conclude that there are enough near analogies in the 374 sat questions for attributional similarity to perform better than random guessing , but not enough near analogies for attributional similarity to perform as well as relational similarity . this section is a brief survey of the many problems that involve semantic relations and could potentially make use of an algorithm for measuring relational similarity . the problem of recognizing word analogies is , given a stem word pair and a finite list of choice word pairs , selecting the choice that is most analogous to the stem . this problem was first attempted by a system called argus ( reitman 1965 ) , using a small hand-built semantic network . argus could only solve the limited set of analogy questions that its programmer had anticipated . argus was based on a spreading activation model and did not explicitly attempt to measure relational similarity . turney et al . ( 2003 ) combined 13 independent modules to answer sat questions . the final output of the system was based on a weighted combination of the outputs of each individual module . the best of the 13 modules was the vsm , which is described in detail in turney and littman ( 2005 ) . the vsm was evaluated on a set of 374 sat questions , achieving a score of 47 % . in contrast with the corpus-based approach of turney and littman ( 2005 ) , veale ( 2004 ) applied a lexicon-based approach to the same 374 sat questions , attaining a score of 43 % . veale evaluated the quality of a candidate analogy a : b : :c : d by looking for paths in wordnet , joining a to b and c to d. the quality measure was based on the similarity between the a : b paths and the c : d paths . turney ( 2005 ) introduced latent relational analysis ( lra ) , an enhanced version of the vsm approach , which reached 56 % on the 374 sat questions . here we go beyond turney ( 2005 ) by describing lra in more detail , performing more extensive experiments , and analyzing the algorithm and related work in more depth . french ( 2002 ) cites structure mapping theory ( smt ) ( gentner 1983 ) and its implementation in the structure mapping engine ( sme ) ( falkenhainer , forbus , and gentner 1989 ) as the most influential work on modeling of analogy making . the goal of computational modeling of analogy making is to understand how people form complex , structured analogies . sme takes representations of a source domain and a target domain and produces an analogical mapping between the source and target . the domains are given structured propositional representations , using predicate logic . these descriptions include attributes , relations , and higher-order relations ( expressing relations between relations ) . the analogical mapping connects source domain relations to target domain relations . for example , there is an analogy between the solar system and rutherford ’ s model of the atom ( falkenhainer , forbus , and gentner 1989 ) . the solar system is the source domain and rutherford ’ s model of the atom is the target domain . the basic objects in the source model are the planets and the sun . the basic objects in the target model are the electrons and the nucleus . the planets and the sun have various attributes , such as mass ( sun ) and mass ( planet ) , and various relations , such as revolve ( planet , sun ) and attracts ( sun , planet ) . 
likewise , the nucleus and the electrons have attributes , such as charge ( electron ) and charge ( nucleus ) , and relations , such as revolve ( electron , nucleus ) and attracts ( nucleus , electron ) . sme maps revolve ( planet , sun ) to revolve ( electron , nucleus ) and attracts ( sun , planet ) to attracts ( nucleus , electron ) . each individual connection ( e.g. , from revolve ( planet , sun ) to revolve ( electron , nucleus ) ) in an analogical mapping implies that the connected relations are similar ; thus , smt requires a measure of relational similarity in order to form maps . early versions of sme only mapped identical relations , but later versions of sme allowed similar , nonidentical relations to match ( falkenhainer 1990 ) . however , the focus of research in analogy making has been on the mapping process as a whole , rather than measuring the similarity between any two particular relations ; hence , the similarity measures used in sme at the level of individual connections are somewhat rudimentary . we believe that a more sophisticated measure of relational similarity , such as lra , may enhance the performance of sme . likewise , the focus of our work here is on the similarity between particular relations , and we ignore systematic mapping between sets of relations , so lra may also be enhanced by integration with sme . metaphorical language is very common in our daily life , so common that we are usually unaware of it ( lakoff and johnson 1980 ) . gentner et al . ( 2001 ) argue that novel metaphors are understood using analogy , but conventional metaphors are simply recalled from memory . a conventional metaphor is a metaphor that has become entrenched in our language ( lakoff and johnson 1980 ) . dolan ( 1995 ) describes an algorithm that can recognize conventional metaphors , but is not suited to novel metaphors . this suggests that it may be fruitful to combine dolan ’ s ( 1995 ) algorithm for handling conventional metaphorical language with lra and sme for handling novel metaphors . lakoff and johnson ( 1980 ) give many examples of sentences in support of their claim that metaphorical language is ubiquitous . the metaphors in their sample sentences can be expressed using sat-style verbal analogies of the form a : b : :c : d. the first column in table 5 is a list of sentences from lakoff and johnson ( 1980 ) and the second column shows how the metaphor that is implicit in each sentence may be made explicit as a verbal analogy . the task of classifying semantic relations is to identify the relation between a pair of words . often the pairs are restricted to noun-modifier pairs , but there are many interesting relations , such as antonymy , that do not occur in noun-modifier pairs . however , noun-modifier pairs are interesting due to their high frequency in english . for instance , wordnet 2.0 contains more than 26,000 noun-modifier pairs , although many common noun-modifiers are not in wordnet , especially technical terms . rosario and hearst ( 2001 ) and rosario , hearst , and fillmore ( 2002 ) classify nounmodifier relations in the medical domain , using medical subject headings ( mesh ) and unified medical language system ( umls ) as lexical resources for representing each noun-modifier pair with a feature vector . they trained a neural network to distinguish 13 classes of semantic relations . nastase and szpakowicz ( 2003 ) explore a similar approach to classifying general noun-modifier pairs ( i.e. 
, not restricted to a particular domain , such as medicine ) , using wordnet and roget ’ s thesaurus as lexical resources . vanderwende ( 1994 ) used hand-built rules , together with a lexical knowledge base , to classify noun-modifier pairs . none of these approaches explicitly involved measuring relational similarity , but any classification of semantic relations necessarily employs some implicit notion of relational similarity since members of the same class must be relationally similar to some extent . barker and szpakowicz ( 1998 ) tried a corpus-based approach that explicitly used a measure of relational similarity , but their measure was based on literal matching , which limited its ability to generalize . moldovan et al . ( 2004 ) also used a measure of relational similarity based on mapping each noun and modifier into semantic classes in wordnet . the noun-modifier pairs were taken from a corpus , and the surrounding context in the corpus was used in a word sense disambiguation algorithm to improve the mapping of the noun and modifier into wordnet . turney and littman ( 2005 ) used the vsm ( as a component in a single nearest neighbor learning algorithm ) to measure relational similarity . we take the same approach here , substituting lra for the vsm , in section 7 . lauer ( 1995 ) used a corpus-based approach ( using the bnc ) to paraphrase noun– modifier pairs by inserting the prepositions of , for , in , at , on , from , with , and about . for example , reptile haven was paraphrased as haven for reptiles . lapata and keller ( 2004 ) achieved improved results on this task by using the database of altavista ’ s search engine as a corpus . we believe that the intended sense of a polysemous word is determined by its semantic relations with the other words in the surrounding text . if we can identify the semantic relations between the given word and its context , then we can disambiguate the given word . yarowsky ’ s ( 1993 ) observation that collocations are almost always monosemous is evidence for this view . federici , montemagni , and pirrelli ( 1997 ) present an analogybased approach to word sense disambiguation . for example , consider the word plant . out of context , plant could refer to an industrial plant or a living organism . suppose plant appears in some text near food . a typical approach to disambiguating plant would compare the attributional similarity of food and industrial plant to the attributional similarity of food and living organism ( lesk 1986 ; banerjee and pedersen 2003 ) . in this case , the decision may not be clear , since industrial plants often produce food and living organisms often serve as food . it would be very helpful to know the relation between food and plant in this example . in the phrase β€œ food for the plant , ” the relation between food and plant strongly suggests that the plant is a living organism , since industrial plants do not need food . in the text β€œ food at the plant , ” the relation strongly suggests that the plant is an industrial plant , since living organisms are not usually considered as locations . thus , an algorithm for classifying semantic relations ( as in section 7 ) should be helpful for word sense disambiguation . the problem of relation extraction is , given an input document and a specific relation r , to extract all pairs of entities ( if any ) that have the relation r in the document . the problem was introduced as part of the message understanding conferences ( muc ) in 1998 . 
zelenko , aone , and richardella ( 2003 ) present a kernel method for extracting the relations person–affiliation and organization–location . for example , in the sentence john smith is the chief scientist of the hardcom corporation , there is a person–affiliation relation between john smith and hardcom corporation ( zelenko , aone , and richardella 2003 ) . this is similar to the problem of classifying semantic relations ( section 3.4 ) , except that information extraction focuses on the relation between a specific pair of entities in a specific document , rather than a general pair of words in general text . therefore an algorithm for classifying semantic relations should be useful for information extraction . in the vsm approach to classifying semantic relations ( turney and littman 2005 ) , we would have a training set of labeled examples of the relation person–affiliation , for instance . each example would be represented by a vector of pattern frequencies . given a specific document discussing john smith and hardcom corporation , we could construct a vector representing the relation between these two entities and then measure the relational similarity between this unlabeled vector and each of our labeled training vectors . it would seem that there is a problem here because the training vectors would be relatively dense , since they would presumably be derived from a large corpus , but the new unlabeled vector for john smith and hardcom corporation would be very sparse , since these entities might be mentioned only once in the given document . however , this is not a new problem for the vsm ; it is the standard situation when the vsm is used for information retrieval . a query to a search engine is represented by a very sparse vector , whereas a document is represented by a relatively dense vector . there are well-known techniques in information retrieval for coping with this disparity , such as weighting schemes for query vectors that are different from the weighting schemes for document vectors ( salton and buckley 1988 ) . in their article on classifying semantic relations , moldovan et al . ( 2004 ) suggest that an important application of their work is question answering ( qa ) . as defined in the text retrieval conference ( trec ) qa track , the task is to answer simple questions , such as β€œ where have nuclear incidents occurred ? ” , by retrieving a relevant document from a large corpus and then extracting a short string from the document , such as the three mile island nuclear incident caused a doe policy crisis . moldovan et al . ( 2004 ) propose to map a given question to a semantic relation and then search for that relation in a corpus of semantically tagged text . they argue that the desired semantic relation can easily be inferred from the surface form of the question . a question of the form β€œ where ... ? ” is likely to be looking for entities with a location relation and a question of the form β€œ what did ... make ? ” is likely to be looking for entities with a product relation . in section 7 , we show how lra can recognize relations such as location and product ( see table 19 ) . hearst ( 1992 ) presents an algorithm for learning hyponym ( type of ) relations from a corpus and berland and charniak ( 1999 ) describe how to learn meronym ( part of ) relations from a corpus . these algorithms could be used to automatically generate a thesaurus or dictionary , but we would like to handle more relations than hyponymy and meronymy . 
wordnet distinguishes more than a dozen semantic relations between words ( fellbaum 1998 ) and nastase and szpakowicz ( 2003 ) list 30 semantic relations for noun-modifier pairs . hearst and berland and charniak ( 1999 ) use manually generated rules to mine text for semantic relations . turney and littman ( 2005 ) also use a manually generated set of 64 patterns . lra does not use a predefined set of patterns ; it learns patterns from a large corpus . instead of manually generating new rules or patterns for each new semantic relation , it is possible to automatically learn a measure of relational similarity that can handle arbitrary semantic relations . a nearest neighbor algorithm can then use this relational similarity measure to learn to classify according to any set of classes of relations , given the appropriate labeled training data . girju , badulescu , and moldovan ( 2003 ) present an algorithm for learning meronym relations from a corpus . like hearst ( 1992 ) and berland and charniak ( 1999 ) , they use manually generated rules to mine text for their desired relation . however , they supplement their manual rules with automatically learned constraints , to increase the precision of the rules . veale ( 2003 ) has developed an algorithm for recognizing certain types of word analogies , based on information in wordnet . he proposes to use the algorithm for analogical information retrieval . for example , the query muslim church should return mosque and the query hindu bible should return the vedas . the algorithm was designed with a focus on analogies of the form adjective : noun : :adjective : noun , such as christian : church : :muslim : mosque . a measure of relational similarity is applicable to this task . given a pair of words , a and b , the task is to return another pair of words , x and y , such that there is high relational similarity between the pair a : x and the pair y : b . for example , given a = muslim and b = church , return x = mosque and y = christian . ( the pair muslim : mosque has a high relational similarity to the pair christian : church . ) marx et al . ( 2002 ) developed an unsupervised algorithm for discovering analogies by clustering words from two different corpora . each cluster of words in one corpus is coupled one-to-one with a cluster in the other corpus . for example , one experiment used a corpus of buddhist documents and a corpus of christian documents . a cluster of words such as { hindu , mahayana , zen , ... } from the buddhist corpus was coupled with a cluster of words such as { catholic , protestant , ... } from the christian corpus . thus the algorithm appears to have discovered an analogical mapping between buddhist schools and traditions and christian schools and traditions . this is interesting work , but it is not directly applicable to sat analogies , because it discovers analogies between clusters of words rather than individual words . a semantic frame for an event such as judgement contains semantic roles such as judge , evaluee , and reason , whereas an event such as statement contains roles such as speaker , addressee , and message ( gildea and jurafsky 2002 ) . the task of identifying semantic roles is to label the parts of a sentence according to their semantic roles . we believe that it may be helpful to view semantic frames and their semantic roles as sets of semantic relations ; thus , a measure of relational similarity should help us to identify semantic roles . moldovan et al . 
( 2004 ) argue that semantic roles are merely a special case of semantic relations ( section 3.4 ) , since semantic roles always involve verbs or predicates , but semantic relations can involve words of any part of speech . this section examines past work on measuring attributional and relational similarity using the vsm . the vsm was first developed for information retrieval ( salton and mcgill 1983 ; salton and buckley 1988 ; salton 1989 ) and it is at the core of most modern search engines ( baeza-yates and ribeiro-neto 1999 ) . in the vsm approach to information retrieval , queries and documents are represented by vectors . elements in these vectors are based on the frequencies of words in the corresponding queries and documents . the frequencies are usually transformed by various formulas and weights , tailored to improve the effectiveness of the search engine ( salton 1989 ) . the attributional similarity between a query and a document is measured by the cosine of the angle between their corresponding vectors . for a given query , the search engine sorts the matching documents in order of decreasing cosine . the vsm approach has also been used to measure the attributional similarity of words ( lesk 1969 ; ruge 1992 ; pantel and lin 2002 ) . pantel and lin ( 2002 ) clustered words according to their attributional similarity , as measured by a vsm . their algorithm is able to discover the different senses of polysemous words , using unsupervised learning . latent semantic analysis enhances the vsm approach to information retrieval by using the singular value decomposition ( svd ) to smooth the vectors , which helps to handle noise and sparseness in the data ( deerwester et al . 1990 ; dumais 1993 ; landauer and dumais 1997 ) . svd improves both document-query attributional similarity measures ( deerwester et al . 1990 ; dumais 1993 ) and word–word attributional similarity measures ( landauer and dumais 1997 ) . lra also uses svd to smooth vectors , as we discuss in section 5 . let r1 be the semantic relation ( or set of relations ) between a pair of words , a and b , and let r2 be the semantic relation ( or set of relations ) between another pair , c and d. we wish to measure the relational similarity between r1 and r2 . the relations r1 and r2 are not given to us ; our task is to infer these hidden ( latent ) relations and then compare them . in the vsm approach to relational similarity ( turney and littman 2005 ) , we create vectors , r1 and r2 , that represent features of r1 and r2 , and then measure the similarity of r1 and r2 by the cosine of the angle 0 between r1 and r2 : we create a vector , r , to characterize the relationship between two words , x and y , by counting the frequencies of various short phrases containing x and y. turney and littman ( 2005 ) use a list of 64 joining terms , such as of , for , and to , to form 128 phrases that contain x and y , such as x of y , y of x , x for y , y for x , x to y , and y to x . these phrases are then used as queries for a search engine and the number of hits ( matching documents ) is recorded for each query . this process yields a vector of 128 numbers . if the number of hits for a query is x , then the corresponding element in the vector r is log ( x + 1 ) . several authors report that the logarithmic transformation of frequencies improves cosine-based similarity measures ( salton and buckley 1988 ; ruge 1992 ; lin 1998b ) . 
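as an illustration of the vsm representation just described , the sketch below builds a vector of log-transformed hit counts for joined phrases and compares word pairs by cosine , including the answer-by-highest-cosine use on analogy questions described in the next paragraph . only a tiny subset of the 64 joining terms is shown , and get_hit_count is a hypothetical stand-in for querying a search engine .

```python
import math

# Tiny illustrative subset of the joining terms; the full 64-term list and
# the search-engine interface are stand-ins, not the originals.
JOINING_TERMS = ["of", "for", "to", "in", "on", "with"]

def pattern_vector(x, y, get_hit_count):
    """Vector of log(hits + 1) for phrases like 'x of y' and 'y of x',
    two elements per joining term."""
    vec = []
    for j in JOINING_TERMS:
        vec.append(math.log(get_hit_count(f"{x} {j} {y}") + 1))
        vec.append(math.log(get_hit_count(f"{y} {j} {x}") + 1))
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def answer_analogy(stem, choices, get_hit_count):
    """Pick the choice pair whose vector has the highest cosine with the stem."""
    stem_vec = pattern_vector(*stem, get_hit_count)
    return max(choices,
               key=lambda c: cosine(stem_vec, pattern_vector(*c, get_hit_count)))
```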
turney and littman ( 2005 ) evaluated the vsm approach by its performance on 374 sat analogy questions , achieving a score of 47 % . since there are five choices for each question , the expected score for random guessing is 20 % . to answer a multiple-choice analogy question , vectors are created for the stem pair and each choice pair , and then cosines are calculated for the angles between the stem pair and each choice pair . the best guess is the choice pair with the highest cosine . we use the same set of analogy questions to evaluate lra in section 6 . the vsm was also evaluated by its performance as a distance ( nearness ) measure in a supervised nearest neighbor classifier for noun-modifier semantic relations ( turney and littman 2005 ) . the evaluation used 600 hand-labeled noun-modifier pairs from nastase and szpakowicz ( 2003 ) . a testing pair is classified by searching for its single nearest neighbor in the labeled training data . the best guess is the label for the training pair with the highest cosine . lra is evaluated with the same set of noun-modifier pairs in section 7 . turney and littman ( 2005 ) used the altavista search engine to obtain the frequency information required to build vectors for the vsm . thus their corpus was the set of all web pages indexed by altavista . at the time , the english subset of this corpus consisted of about 5 x 10^11 words . around april 2004 , altavista made substantial changes to their search engine , removing their advanced search operators . their search engine no longer supports the asterisk operator , which was used by turney and littman ( 2005 ) for stemming and wild-card searching . altavista also changed their policy toward automated searching , which is now forbidden.3 turney and littman ( 2005 ) used altavista 's hit count , which is the number of documents ( web pages ) matching a given query , but lra uses the number of passages ( strings ) matching a query . in our experiments with lra ( sections 6 and 7 ) , we use a local copy of the waterloo multitext system ( wmts ) ( clarke , cormack , and palmer 1998 ; terra and clarke 2003 ) , running on a 16 cpu beowulf cluster , with a corpus of about 5 x 10^10 english words . the wmts is a distributed ( multiprocessor ) search engine , designed primarily for passage retrieval ( although document retrieval is possible , as a special case of passage retrieval ) . the text and index require approximately one terabyte of disk space . although altavista only gives a rough estimate of the number of matching documents , the wmts gives exact counts of the number of matching passages . turney et al . ( 2003 ) combine 13 independent modules to answer sat questions . the performance of lra significantly surpasses this combined system , but there is no real contest between these approaches , because we can simply add lra to the combination , as a fourteenth module . since the vsm module had the best performance of the 13 modules ( turney et al . 2003 ) , the following experiments focus on comparing vsm and lra . lra takes as input a set of word pairs and produces as output a measure of the relational similarity between any two of the input pairs . lra relies on three resources , a search engine with a very large corpus of text , a broad-coverage thesaurus of synonyms , and an efficient implementation of svd . we first present a short description of the core algorithm .
later , in the following subsections , we will give a detailed description of the algorithm , as it is applied in the experiments in sections 6 and 7. intended to form near analogies with the corresponding original pairs ( see section 2.3 ) . the motivation for the alternate pairs is to handle cases where the original pairs cooccur rarely in the corpus . the hope is that we can find near analogies for the original pairs , such that the near analogies co-occur more frequently in the corpus . the danger is that the alternates may have different relations from the originals . the filtering steps above aim to reduce this risk . in our experiments , the input set contains from 600 to 2,244 word pairs . the output similarity measure is based on cosines , so the degree of similarity can range from βˆ’1 ( dissimilar ; 0 = 180Β° ) to +1 ( similar ; 0 = 0Β° ) . before applying svd , the vectors are completely non-negative , which implies that the cosine can only range from 0 to +1 , but svd introduces negative values , so it is possible for the cosine to be negative , although we have never observed this in our experiments . in the following experiments , we use a local copy of the wmts ( clarke , cormack , and palmer 1998 ; terra and clarke 2003 ) .4 the corpus consists of about 5 x 1010 english words , gathered by a web crawler , mainly from us academic web sites . the web pages cover a very wide range of topics , styles , genres , quality , and writing skill . the wmts is well suited to lra , because the wmts scales well to large corpora ( one terabyte , in our case ) , it gives exact frequency counts ( unlike most web search engines ) , it is designed for passage retrieval ( rather than document retrieval ) , and it has a powerful query syntax . as a source of synonyms , we use lin ’ s ( 1998a ) automatically generated thesaurus . this thesaurus is available through an on-line interactive demonstration or it can be downloaded.5 we used the on-line demonstration , since the downloadable version seems to contain fewer words . for each word in the input set of word pairs , we automatically query the on-line demonstration and fetch the resulting list of synonyms . as a courtesy to other users of lin ’ s on-line system , we insert a 20-second delay between each two queries . lin ’ s thesaurus was generated by parsing a corpus of about 5 x 107 english words , consisting of text from the wall street journal , san jose mercury , and ap newswire ( lin 1998a ) . the parser was used to extract pairs of words and their grammatical relations . words were then clustered into synonym sets , based on the similarity of their grammatical relations . two words were judged to be highly similar when they tended to have the same kinds of grammatical relations with the same sets of words . given a word and its part of speech , lin ’ s thesaurus provides a list of words , sorted in order of decreasing attributional similarity . this sorting is convenient for lra , since it makes it possible to focus on words with higher attributional similarity and ignore the rest . wordnet , in contrast , given a word and its part of speech , provides a list of words grouped by the possible senses of the given word , with groups sorted by the frequencies of the senses . 
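here is a small sketch of the alternate-pair generation and filtering motivated above and spelled out in the numbered steps that follow : synonyms from a thesaurus replace one member of the original pair , and only the alternates that co-occur most frequently in the corpus within a short window are kept . the thesaurus and co-occurrence interfaces are hypothetical stand-ins for lin's thesaurus and the wmts ; the constants follow the text ( num filter = 3 , max phrase = 5 ) .

```python
# Sketch of alternate generation and frequency filtering (steps 1 and 2 of
# the algorithm described below), under the stated interface assumptions.
NUM_SIM, NUM_FILTER, MAX_PHRASE = 10, 3, 5

def generate_alternates(pair, thesaurus):
    """thesaurus(word) -> synonyms sorted by decreasing attributional similarity."""
    a, b = pair
    alternates = [(a2, b) for a2 in thesaurus(a)[:NUM_SIM]]
    alternates += [(a, b2) for b2 in thesaurus(b)[:NUM_SIM]]
    return alternates

def filter_alternates(pair, thesaurus, cooccurrence_count):
    """cooccurrence_count(x, y) -> corpus frequency of x and y occurring
    within MAX_PHRASE words of each other, in either order."""
    alternates = generate_alternates(pair, thesaurus)
    alternates.sort(key=lambda p: cooccurrence_count(*p), reverse=True)
    return [pair] + alternates[:NUM_FILTER]   # the original pair is always kept
```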
we use rohde 's svdlibc implementation of the svd , which is based on svdpackc ( berry 1992 ) . in lra , svd is used to reduce noise and compensate for sparseness . we will go through each step of lra , using an example to illustrate the steps . assume that the input to lra is the 374 multiple-choice sat word analogy questions of turney and littman ( 2005 ) . since there are six word pairs per question ( the stem and five choices ) , the input consists of 2,244 word pairs . let 's suppose that we wish to calculate the relational similarity between the pair quart : volume and the pair mile : distance , taken from the sat question in table 6 . the lra algorithm consists of the following 12 steps : 1 . find alternates : for each word pair in the input set , look in lin 's thesaurus for words that are similar to each member of the pair , and use them to build alternate pairs in which one member of the original pair is replaced by a similar word ( for example , pint : volume as an alternate for quart : volume ) . 2 . filter alternates : filter the alternates as follows . for each alternate pair , send a query to the wmts , to find the frequency of phrases that begin with one member of the pair and end with the other . the phrases can not have more than max phrase words ( we use max phrase = 5 ) . sort the alternate pairs by the frequency of their phrases . select the top num filter most frequent alternates and discard the remainder ( we use num filter = 3 , so 17 alternates are dropped ) . this step tends to eliminate alternates that have no clear semantic relation . the third column in table 7 shows the frequency with which each pair co-occurs in a window of max phrase words . the last column in table 7 shows the pairs that are selected . table 7 shows alternate forms of the original pair quart : volume . the first column shows the original pair and the alternate pairs . the second column shows lin 's similarity score for the alternate word compared to the original word . for example , the similarity between quart and pint is 0.210 . the third column shows the frequency of the pair in the wmts corpus . the fourth column shows the pairs that pass the filtering step ( i.e. , step 2 ) . 3 . find phrases : for each pair ( the originals and the selected alternates ) , search the corpus for phrases that contain the members of a given pair . the phrases can not have more than max phrase words and there must be at least one word between the two members of the word pair . these phrases give us information about the semantic relations between the words in each pair . a phrase with no words between the two members of the word pair would give us very little information about the semantic relations ( other than that the words occur together with a certain frequency in a certain order ) . table 8 gives some examples of phrases in the corpus that match the pair quart : volume . ( table 8 : some examples of phrases that contain quart : volume . suffixes are ignored when searching for matching phrases in the wmts corpus . at least one word must occur between quart and volume . at most max phrase words can appear in a phrase . the examples are : quarts liquid volume , volume in quarts , quarts of volume , volume capacity quarts , quarts in volume , volume being about two quarts , quart total volume , volume of milk in quarts , quart of spray volume , and volume include measures like quart . ) 4 . find patterns : for each phrase found in the previous step , build patterns from the intervening words . a pattern is constructed by replacing any or all or none of the intervening words with wild cards ( one wild card can replace only one word ) . if a phrase is n words long , there are n - 2 intervening words between the members of the given word pair ( e.g. , between quart and volume ) . thus a phrase with n words generates 2^(n-2) patterns . ( we use max phrase = 5 , so a phrase generates at most eight patterns . ) for each pattern , count the number of pairs ( originals and alternates ) with phrases that match the pattern ( a wild card must match exactly one word ) . keep the top num patterns most frequent patterns and discard the rest ( we use num patterns = 4,000 ) . typically there will be millions of patterns , so it is not feasible to keep them all .
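as a concrete illustration of step 4 , the sketch below ( python ; the tokenization is simplified and the pattern format is only illustrative ) generates the 2^(n-2) wild-card patterns for a single phrase .

```python
from itertools import product

def patterns_from_phrase(phrase):
    # phrase: list of tokens that starts with one member of the word pair and
    # ends with the other, e.g. ["quart", "of", "spray", "volume"]
    intervening = phrase[1:-1]                 # the n - 2 intervening words
    pats = set()
    # each intervening word is either kept or replaced by a "*" wild card,
    # giving 2^(n-2) patterns per phrase
    for mask in product([False, True], repeat=len(intervening)):
        middle = ["*" if wild else word for word, wild in zip(intervening, mask)]
        pats.add(" ".join(["word1"] + middle + ["word2"]))
    return pats

pats = patterns_from_phrase(["quart", "of", "spray", "volume"])
print(len(pats))         # 4 patterns for 2 intervening words
for p in sorted(pats):
    print(p)
```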
steps 5 , 6 , and 7 build a sparse matrix x from the pairs and the patterns : for each pair a : b , there is a row for a : b and a row for b : a ( step 5 ) ; for each pattern p , there is a column for " word1 p word2 " and a column for " word2 p word1 " ( step 6 ) ; and the cell x_{i,j} holds the frequency with which the pair in row i appears in phrases that match the pattern in column j ( step 7 ) . 8 . calculate entropy : the cells are then weighted by the column entropies . let m be the number of rows , let p_{i,j} = x_{i,j} / sum_i x_{i,j} , and let h_j be the entropy of column j , computed from the p_{i,j} . we want to give more weight to columns ( patterns ) with frequencies that vary substantially from one row ( word pair ) to the next , and less weight to columns that are uniform . therefore we weight the cell x_{i,j} by w_j = 1 - h_j / log ( m ) , which varies from 0 when p_{i,j} is uniform to 1 when entropy is minimal . we also apply the log transformation to frequencies , log ( x_{i,j} + 1 ) . ( entropy is calculated with the original frequency values , before the log transformation is applied . ) for all i and all j , replace the original value x_{i,j} in x by the new value w_j log ( x_{i,j} + 1 ) . this is an instance of the term frequency-inverse document frequency ( tf-idf ) family of transformations , which is familiar in information retrieval ( salton and buckley 1988 ; baeza-yates and ribeiro-neto 1999 ) : log ( x_{i,j} + 1 ) is the tf term and w_j is the idf term . 9 . apply svd : decompose x as the product u e v^t , where u and v are column-orthonormal matrices and e is a diagonal matrix of singular values . let e_k be the diagonal matrix formed from the top k singular values , and let u_k and v_k be the matrices formed by selecting the corresponding columns of u and v . the matrix u_k e_k v_k^t is the matrix of rank k that best approximates the original matrix x , in the sense that it minimizes the approximation errors . that is , x_hat = u_k e_k v_k^t minimizes || x_hat - x ||_F over all matrices x_hat of rank k , where || . ||_F denotes the frobenius norm ( golub and van loan 1996 ) . we may think of this matrix u_k e_k v_k^t as a " smoothed " or " compressed " version of the original matrix . in the subsequent steps , we will be calculating cosines for row vectors . for this purpose , we can simplify calculations by dropping v . the cosine of two vectors is their dot product , after they have been normalized to unit length . the matrix x x^t contains the dot products of all of the row vectors . we can find the dot product of the ith and jth row vectors by looking at the cell in row i , column j of the matrix x x^t . since v^t v = i ( the identity matrix ) , we have x x^t = u e v^t ( u e v^t )^t = u e v^t v e^t u^t = u e ( u e )^t , which means that we can calculate cosines with the smaller matrix u e , instead of using x = u e v^t ( deerwester et al . 1990 ) . 10 . projection : calculate u_k e_k ( we use k = 300 ) . this matrix has the same number of rows as x , but only k columns ( instead of 2 x num patterns columns ; in our experiments , that is 300 columns instead of 8,000 ) . we can compare two word pairs by calculating the cosine of the corresponding row vectors in u_k e_k . the row vector for each word pair has been projected from the original 8,000 dimensional space into a new 300 dimensional space . the value k = 300 is recommended by landauer and dumais ( 1997 ) for measuring the attributional similarity between words . we investigate other values in section 6.4 . 11 . evaluate alternates : let a : b and c : d be any two word pairs in the input set . from step 2 , we have the original pair plus num filter alternates for each of a : b and c : d , that is , ( num filter + 1 ) versions of each . therefore we have ( num filter + 1 )^2 ways to compare a version of a : b with a version of c : d . look for the row vectors in u_k e_k that correspond to the versions of a : b and the versions of c : d and calculate the ( num filter + 1 )^2 cosines ( in our experiments , there are 16 cosines ) . for example , suppose a : b is quart : volume and c : d is mile : distance . table 10 gives the cosines for the sixteen combinations .
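a compact sketch of the weighting , smoothing , and projection in steps 8 through 10 , and of the cosine comparison used in step 11 ( python with numpy and scipy ; the toy random matrix stands in for the real 17,232 x 8,000 pair-by-pattern matrix , and k is small only because the example is tiny ) .

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import svds

def log_entropy_weight(x):
    # x: dense non-negative frequency matrix (rows = word pairs, cols = patterns)
    m = x.shape[0]
    col_sums = x.sum(axis=0)
    col_sums[col_sums == 0] = 1.0                   # avoid division by zero for empty columns
    p = x / col_sums                                # p_ij = x_ij / sum_i x_ij
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(p > 0, p * np.log(p), 0.0)
    h = -plogp.sum(axis=0)                          # column entropies h_j
    w = 1.0 - h / np.log(m)                         # w_j = 1 - h_j / log(m)
    return w * np.log(x + 1.0)                      # tf-like log term times idf-like weight

rng = np.random.default_rng(0)
x = rng.poisson(1.0, size=(12, 20)).astype(float)   # toy pair-by-pattern counts
xw = log_entropy_weight(x)

k = 5                                               # k = 300 in the real system
u, s, vt = svds(csr_matrix(xw), k=k)                # truncated svd: x ~ u_k e_k v_k^t
projected = u * s                                   # rows of u_k e_k, one vector per pair

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(round(cosine(projected[0], projected[1]), 3)) # compare two word pairs, as in step 11
```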
12 . calculate relational similarity : the relational similarity between a : b and c : d is the average of the cosines , among the ( num filter + 1 )^2 cosines from step 11 , that are greater than or equal to the cosine of the original pairs , a : b and c : d . the requirement that the cosine must be greater than or equal to the original cosine is a way of filtering out poor analogies , which may be introduced in step 1 and may have slipped through the filtering in step 2 . averaging the cosines , as opposed to taking their maximum , is intended to provide some resistance to noise . for quart : volume and mile : distance , the third column in table 10 shows which alternates are used to calculate the average . for these two pairs , the average of the selected cosines is 0.677 . in table 7 , we see that pumping : volume has slipped through the filtering in step 2 , although it is not a good alternate for quart : volume . however , table 10 shows that all four analogies that involve pumping : volume are dropped here , in step 12 . steps 11 and 12 can be repeated for each two input pairs that are to be compared . this completes the description of lra . table 11 gives the cosines for the sample sat question . the choice pair with the highest average cosine ( the choice with the largest value in column 1 ) , choice ( b ) , is the solution for this question ; lra answers the question correctly . for comparison , column 2 gives the cosines for the original pairs and column 3 gives the highest cosine . for this particular sat question , there is one choice that has the highest cosine for all three columns , choice ( b ) , although this is not true in general . note that the gap between the first choice ( b ) and the second choice ( d ) is largest for the average cosines ( column 1 ) . this suggests that the average of the cosines ( column 1 ) is better at discriminating the correct choice than either the original cosine ( column 2 ) or the highest cosine ( column 3 ) . this section presents various experiments with 374 multiple-choice sat word analogy questions . table 12 shows the performance of the baseline lra system on the 374 sat questions , using the parameter settings and configuration described in section 5 . lra correctly answered 210 of the 374 questions ; 160 questions were answered incorrectly and 4 questions were skipped , because the stem pair and its alternates were represented by zero vectors . the performance of lra is significantly better than the lexicon-based approach of veale ( 2004 ) ( see section 3.1 ) and the best performance using attributional similarity ( see section 2.3 ) , with 95 % confidence , according to the fisher exact test ( agresti 1990 ) . as another point of reference , consider the simple strategy of always guessing the choice with the highest co-occurrence frequency . the idea here is that the words in the solution pair may occur together frequently , because there is presumably a clear and meaningful relation between the solution words , whereas the distractors may only occur together rarely because they have no meaningful relation . this strategy is significantly worse than random guessing . the opposite strategy , always guessing the choice pair with the lowest co-occurrence frequency , is also worse than random guessing ( but not significantly ) . it appears that the designers of the sat questions deliberately chose distractors that would thwart these two strategies .
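before turning to the corpus and matrix statistics , here is a small sketch of the similarity computation in steps 11 and 12 ( python with numpy ; the row vectors are invented stand-ins for rows of u_k e_k ) : the ( num filter + 1 )^2 cosines are computed and only those at least as large as the cosine of the two original pairs are averaged .

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def relational_similarity(ab_versions, cd_versions):
    # ab_versions[0] and cd_versions[0] are the row vectors of the original pairs;
    # the remaining entries are the num_filter surviving alternates from step 2
    original = cosine(ab_versions[0], cd_versions[0])
    cosines = [cosine(a, c) for a in ab_versions for c in cd_versions]
    keep = [c for c in cosines if c >= original]     # drop analogies worse than the original
    return sum(keep) / len(keep)                     # average of the selected cosines

rng = np.random.default_rng(1)
ab = [rng.normal(size=8) for _ in range(4)]          # quart:volume plus 3 alternates (toy)
cd = [rng.normal(size=8) for _ in range(4)]          # mile:distance plus 3 alternates (toy)
print(round(relational_similarity(ab, cd), 3))       # 4 x 4 = 16 cosines considered
```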
with 374 questions and six word pairs per question ( one stem and five choices ) , there are 2,244 pairs in the input set . in step 2 , introducing alternate pairs multiplies the number of pairs by four , resulting in 8,976 pairs . in step 5 , for each pair a : b , we add b : a , yielding 17,952 pairs . however , some pairs are dropped because they correspond to zero vectors ( they do not appear together in a window of five words in the wmts corpus ) . also , a few words do not appear in lin ’ s thesaurus , and some word pairs appear twice in the sat questions ( e.g. , lion : cat ) . the sparse matrix ( step 7 ) has 17,232 rows ( word pairs ) and 8,000 columns ( patterns ) , with a density of 5.8 % ( percentage of nonzero values ) . table 13 gives the time required for each step of lra , a total of almost 9 days . all of the steps used a single cpu on a desktop computer , except step 3 , finding the phrases for each word pair , which used a 16 cpu beowulf cluster . most of the other steps are parallelizable ; with a bit of programming effort , they could also be executed on the beowulf cluster . all cpus ( both desktop and cluster ) were 2.4 ghz intel xeons . the desktop computer had 2 gb of ram and the cluster had a total of 16 gb of ram . from turney and littman ( 2005 ) . as mentioned in section 4.2 , we estimate this corpus contained about 5 Γ— 1011 english words at the time the vsm-av experiments took place . vsm-wmts refers to the vsm using the wmts , which contains about 5 Γ— 1010 english words . we generated the vsm-wmts results by adapting the vsm to the wmts . the algorithm is slightly different from turney and littman ’ s ( 2005 ) , because we used passage frequencies instead of document frequencies . all three pairwise differences in recall in table 14 are statistically significant with 95 % confidence , using the fisher exact test ( agresti 1990 ) . the pairwise differences in precision between lra and the two vsm variations are also significant , but the difference in precision between the two vsm variations ( 42.4 % vs. 47.7 % ) is not significant . although vsm-av has a corpus 10 times larger than lra ’ s , lra still performs better than vsm-av . comparing vsm-av to vsm-wmts , the smaller corpus has reduced the score of the vsm , but much of the drop is due to the larger number of questions that were skipped ( 34 for vsm-wmts versus 5 for vsm-av ) . with the smaller corpus , many more of the input word pairs simply do not appear together in short phrases in the corpus . lra is able to answer as many questions as vsm-av , although it uses the same corpus as vsm-wmts , because lin ’ s thesaurus allows lra to substitute synonyms for words that are not in the corpus . vsm-av required 17 days to process the 374 analogy questions ( turney and littman 2005 ) , compared to 9 days for lra . as a courtesy to altavista , turney and littman ( 2005 ) inserted a 5-second delay between each two queries . since the wmts is running locally , there is no need for delays . vsm-wmts processed the questions in only one day . the average performance of college-bound senior high school students on verbal sat questions corresponds to a recall ( percent correct ) of about 57 % ( turney and littman 2005 ) . the sat i test consists of 78 verbal questions and 60 math questions ( there is also an sat ii test , covering specific subjects , such as chemistry ) . analogy questions are only a subset of the 78 verbal sat questions . 
if we assume that the difficulty of our 374 analogy questions is comparable to the difficulty of the 78 verbal sat i questions , then we can estimate that the average college-bound senior would correctly answer about 57 % of the 374 analogy questions . of our 374 sat questions , 190 are from a collection of ten official sat tests ( claman 2000 ) . on this subset of the questions , lra has a recall of 61.1 % , compared to a recall of 51.1 % on the other 184 questions . the 184 questions that are not from claman ( 2000 ) seem to be more difficult . this indicates that we may be underestimating how well lra performs , relative to college-bound senior high school students . claman ( 2000 ) suggests that the analogy questions may be somewhat harder than other verbal sat questions , so we may be slightly overestimating the mean human score on the analogy questions . table 15 gives the 95 % confidence intervals for lra , vsm-av , and vsm-wmts , calculated by the binomial exact test ( agresti 1990 ) . there is no significant difference between lra and human performance , but vsm-av and vsm-wmts are significantly below human-level performance . there are several parameters in the lra algorithm ( see section 5.5 ) . the parameter values were determined by trying a small number of possible values on a small set of questions that were set aside . since lra is intended to be an unsupervised learning algorithm , we did not attempt to tune the parameter values to maximize the precision and recall on the 374 sat questions . we hypothesized that lra is relatively insensitive to the values of the parameters . table 16 shows the variation in the performance of lra as the parameter values are adjusted . we take the baseline parameter settings ( given in section 5.5 ) and vary each parameter , one at a time , while holding the remaining parameters fixed at their baseline values . none of the precision and recall values are significantly different from the baseline , according to the fisher exact test ( agresti 1990 ) , at the 95 % confidence level . this supports the hypothesis that the algorithm is not sensitive to the parameter values . although a full run of lra on the 374 sat questions takes 9 days , for some of the parameters it is possible to reuse cached data from previous runs . we limited the experiments with num sim and max phrase because caching was not as helpful for these parameters , so experimenting with them required several weeks . as mentioned in the introduction , lra extends the vsm approach of turney and littman ( 2005 ) by ( 1 ) exploring variations on the analogies by replacing words with synonyms ( step 1 ) , ( 2 ) automatically generating connecting patterns ( step 4 ) , and ( 3 ) smoothing the data with svd ( step 9 ) . in this subsection , we ablate each of these three components to assess their contribution to the performance of lra . table 17 shows the results . without svd ( compare column 1 to 2 in table 17 ) , performance drops , but the drop is not statistically significant with 95 % confidence , according to the fisher exact test ( agresti 1990 ) . however , we hypothesize that the drop in performance would be significant with a larger set of word pairs . more word pairs would increase the sample size , which would decrease the 95 % confidence interval , which would likely show that svd is making a significant contribution . furthermore , more word pairs would increase the matrix size , which would give svd more leverage . 
for example , landauer and dumais ( 1997 ) apply svd to a matrix of 30,473 columns by 60,768 rows , but our matrix here is 8,000 columns by 17,232 rows . we are currently gathering more sat questions to test this hypothesis . without synonyms ( compare column 1 to 3 in table 17 ) , recall drops significantly ( from 56.1 % to 49.5 % ) , but the drop in precision is not significant . when the synonym component is dropped , the number of skipped questions rises from 4 to 22 , which demonstrates the value of the synonym component of lra for compensating for sparse data . when both svd and synonyms are dropped ( compare column 1 to 4 in table 17 ) , the decrease in recall is significant , but the decrease in precision is not significant . again , we believe that a larger sample size would show that the drop in precision is significant . if we eliminate both synonyms and svd from lra , all that distinguishes lra from vsm-wmts is the patterns ( step 4 ) . the vsm approach uses a fixed list of 64 patterns to generate 128 dimensional vectors ( turney and littman 2005 ) , whereas lra uses a dynamically generated set of 4,000 patterns , resulting in 8,000 dimensional vectors . we can see the value of the automatically generated patterns by comparing lra without synonyms and svd ( column 4 ) to vsm-wmts ( column 5 ) . the difference in both precision and recall is statistically significant with 95 % confidence , according to the fisher exact test ( agresti 1990 ) . the ablation experiments support the value of the patterns ( step 4 ) and synonyms ( step 1 ) in lra , but the contribution of svd ( step 9 ) has not been proven , although we believe more data will support its effectiveness . nonetheless , the three components together result in a 16 % increase in f ( compare column 1 to 5 ) . we know a priori that , if a : b : : c : d , then b : a : : d : c . for example , mason is to stone as carpenter is to wood implies stone is to mason as wood is to carpenter . therefore , a good measure of relational similarity , sim_r , should obey the following equation : sim_r ( a : b , c : d ) = sim_r ( b : a , d : c ) ( 8 ) . in steps 5 and 6 of the lra algorithm ( section 5.5 ) , we ensure that the matrix x is symmetrical , so that equation ( 8 ) is necessarily true for lra . the matrix is designed so that the row vector for a : b is different from the row vector for b : a only by a permutation of the elements . the same permutation distinguishes the row vectors for c : d and d : c . therefore the cosine of the angle between a : b and c : d must be identical to the cosine of the angle between b : a and d : c ( see equation ( 7 ) ) . to discover the consequences of this design decision , we altered steps 5 and 6 so that symmetry is no longer preserved . in step 5 , for each word pair a : b that appears in the input set , we only have one row . there is no row for b : a unless b : a also appears in the input set . thus the number of rows in the matrix dropped from 17,232 to 8,616 . in step 6 , we no longer have two columns for each pattern p , one for " word1 p word2 " and another for " word2 p word1 " . however , to be fair , we kept the total number of columns at 8,000 . in step 4 , we selected the top 8,000 patterns ( instead of the top 4,000 ) , distinguishing the pattern " word1 p word2 " from the pattern " word2 p word1 " ( instead of considering them equivalent ) . thus a pattern p with a high frequency is likely to appear in two columns , in both possible orders , but a lower frequency pattern might appear in only one column , in only one possible order .
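the symmetry argument above can be checked with a tiny sketch ( python with numpy ; the column layout is an illustrative assumption ) : if the columns come in ( " word1 p word2 " , " word2 p word1 " ) pairs , the row vector for b : a is the row vector for a : b with each such pair of columns swapped , and equation ( 8 ) follows .

```python
import numpy as np

def swap_direction(row):
    # columns are assumed to be laid out as [p1_fwd, p1_rev, p2_fwd, p2_rev, ...];
    # the vector for b:a permutes each (fwd, rev) pair of the vector for a:b
    out = row.copy()
    out[0::2], out[1::2] = row[1::2], row[0::2]
    return out

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(2)
ab, cd = rng.random(8), rng.random(8)                # toy row vectors for a:b and c:d
ba, dc = swap_direction(ab), swap_direction(cd)
print(np.isclose(cosine(ab, cd), cosine(ba, dc)))    # True: equation (8) holds
```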
these changes resulted in a slight decrease in performance . recall dropped from 56.1 % to 55.3 % and precision dropped from 56.8 % to 55.9 % . the decrease is not statistically significant . however , the modified algorithm no longer obeys equation ( 8 ) . although dropping symmetry appears to cause no significant harm to the performance of the algorithm on the sat questions , we prefer to retain symmetry , to ensure that equation ( 8 ) is satisfied . note that , if a : b : : c : d , it does not follow that b : a : : c : d . for example , it is false that " stone is to mason as carpenter is to wood " . in general ( except when the semantic relations between a and b are symmetrical ) , we have the following inequality : sim_r ( a : b , c : d ) is not equal to sim_r ( b : a , c : d ) . therefore we do not want a : b and b : a to be represented by identical row vectors , although it would ensure that equation ( 8 ) is satisfied . in step 12 of lra , the relational similarity between a : b and c : d is the average of the cosines , among the ( num filter + 1 )^2 cosines from step 11 , that are greater than or equal to the cosine of the original pairs , a : b and c : d . that is , the average includes only those alternates that are " better " than the originals . taking all alternates instead of the better alternates , recall drops from 56.1 % to 40.4 % and precision drops from 56.8 % to 40.8 % . both decreases are statistically significant with 95 % confidence , according to the fisher exact test ( agresti 1990 ) . suppose a word pair a : b corresponds to a vector r in the matrix x . it would be convenient if inspection of r gave us a simple explanation or description of the relation between a and b . for example , suppose the word pair ostrich : bird maps to the row vector r . it would be pleasing to look in r and find that the largest element corresponds to the pattern " is the largest " ( i.e. , " ostrich is the largest bird " ) . unfortunately , inspection of r reveals no such convenient patterns . we hypothesize that the semantic content of a vector is distributed over the whole vector ; it is not concentrated in a few elements . to test this hypothesis , we modified step 10 of lra . instead of projecting the 8,000 dimensional vectors into the 300 dimensional space u_k e_k , we use the matrix u_k e_k v_k^t . this matrix yields the same cosines as u_k e_k , but preserves the original 8,000 dimensions , making it easier to interpret the row vectors . for each row vector in u_k e_k v_k^t , we select the n largest values and set all other values to zero . the idea here is that we will only pay attention to the n most important patterns in r ; the remaining patterns will be ignored . this reduces the length of the row vectors , but the cosine is the dot product of normalized vectors ( all vectors are normalized to unit length ; see equation ( 7 ) ) , so the change to the vector lengths has no impact ; only the angle of the vectors is important . if most of the semantic content is in the n largest elements of r , then setting the remaining elements to zero should have relatively little impact . table 18 shows the performance as n varies from 1 to 3,000 . the precision and recall are significantly below the baseline lra until n ≥ 300 ( 95 % confidence , fisher exact test ) . in other words , for a typical sat analogy question , we need to examine the top 300 patterns to explain why lra selected one choice instead of another . we are currently working on an extension of lra that will explain with a single pattern why one choice is better than another .
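the truncation experiment above can be sketched as follows ( python with numpy ; the toy random vectors stand in for rows of u_k e_k v_k^t ) : keep only the n largest values of each row vector , zero the rest , and compare the resulting cosine with the untruncated cosine .

```python
import numpy as np

def keep_top_n(row, n):
    # zero out everything except the n largest values of the row vector
    out = np.zeros_like(row)
    idx = np.argsort(row)[-n:]
    out[idx] = row[idx]
    return out

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(3)
r1, r2 = rng.normal(size=8000), rng.normal(size=8000)   # toy 8,000 dimensional row vectors
full = cosine(r1, r2)
for n in (1, 10, 300, 3000):
    truncated = cosine(keep_top_n(r1, n), keep_top_n(r2, n))
    print(n, round(truncated, 3), round(full, 3))
```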
we have had some promising results , but this work is not yet mature . however , we can confidently claim that interpreting the vectors is not trivial . turney and littman ( 2005 ) used 64 manually generated patterns , whereas lra uses 4,000 automatically generated patterns . we know from section 6.5 that the automatically generated patterns are significantly better than the manually generated patterns . it may be interesting to see how many of the manually generated patterns appear within the automatically generated patterns . if we require an exact match , 50 of the 64 manual patterns can be found in the automatic patterns . if we are lenient about wildcards , and count the pattern not the as matching * not the ( for example ) , then 60 of the 64 manual patterns appear within the automatic patterns . this suggests that the improvement in performance with the automatic patterns is due to the increased quantity of patterns , rather than a qualitative difference in the patterns . turney and littman ( 2005 ) point out that some of their 64 patterns have been used by other researchers . for example , hearst ( 1992 ) used the pattern such as to discover hyponyms and berland and charniak ( 1999 ) used the pattern of the to discover meronyms . both of these patterns are included in the 4,000 patterns automatically generated by lra . the novelty in turney and littman ( 2005 ) is that their patterns are not used to mine text for instances of word pairs that fit the patterns ( hearst 1992 ; berland and charniak 1999 ) ; instead , they are used to gather frequency data for building vectors that represent the relation between a given pair of words . the results in section 6.8 show that a vector contains more information than any single pattern or small set of patterns ; a vector is a distributed representation . lra is distinct from hearst ( 1992 ) and berland and charniak ( 1999 ) in its focus on distributed representations , which it shares with turney and littman ( 2005 ) , but lra goes beyond turney and littman ( 2005 ) by finding patterns automatically . riloff and jones ( 1999 ) and yangarber ( 2003 ) also find patterns automatically , but their goal is to mine text for instances of word pairs ; the same goal as hearst ( 1992 ) and berland and charniak ( 1999 ) . because lra uses patterns to build distributed vector representations , it can exploit patterns that would be much too noisy and unreliable for the kind of text mining instance extraction that is the objective of hearst ( 1992 ) , berland and charniak ( 1999 ) , riloff and jones ( 1999 ) , and yangarber ( 2003 ) . therefore lra can simply select the highest frequency patterns ( step 4 in section 5.5 ) ; it does not need the more sophisticated selection algorithms of riloff and jones ( 1999 ) and yangarber ( 2003 ) . this section describes experiments with 600 noun-modifier pairs , hand-labeled with 30 classes of semantic relations ( nastase and szpakowicz 2003 ) . in the following experiments , lra is used with the baseline parameter values , exactly as described in section 5.5 . no adjustments were made to tune lra to the noun-modifier pairs . lra is used as a distance ( nearness ) measure in a single nearest neighbor supervised learning algorithm . the following experiments use the 600 labeled noun-modifier pairs of nastase and szpakowicz ( 2003 ) . this data set includes information about the part of speech and wordnet synset ( synonym set ; i.e. , word sense tag ) of each word , but our algorithm does not use this information . 
table 19 lists the 30 classes of semantic relations . the table is based on appendix a of nastase and szpakowicz ( 2003 ) , with some simplifications . the original table listed several semantic relations for which there were no instances in the data set . these were relations that are typically expressed with longer phrases ( three or more words ) , rather than noun-modifier word pairs . for clarity , we decided not to include these relations in table 19 . in this table , h represents the head noun and m represents the modifier . for example , in flu virus , the head noun ( h ) is virus and the modifier ( m ) is flu . in english , the modifier ( typically a noun or adjective ) usually precedes the head noun . in the description of purpose , v represents an arbitrary verb . in concert hall , the hall is for presenting concerts ( v is present ) or holding concerts ( v is hold ) . nastase and szpakowicz ( 2003 ) organized the relations into groups . the five capitalized terms in the relation column of table 19 are the names of five groups of semantic relations . ( the original table had a sixth group , but there are no examples of this group in the data set . ) we make use of this grouping in the following experiments . ( table 19 , partial contents : object property ( obj prop ) , e.g. sunken ship , h underwent m ; part ( part ) , e.g. printer tray , h is part of m ; possessor ( posr ) , e.g. national debt , m has h ; property ( prop ) , e.g. blue book , h is m ; product ( prod ) , e.g. plum tree , h produces m ; source ( src ) , e.g. olive oil , m is the source of h ; stative ( st ) , e.g. sleeping dog , h is in a state of m ; whole ( whl ) , e.g. daisy chain , m is part of h ; container ( cntr ) , e.g. film music , m contains h ; content ( cont ) , e.g. apple cake , m is contained in h ; equative ( eq ) , e.g. player coach , h is also m ; material ( mat ) , e.g. brick house , h is made of m ; measure ( meas ) , e.g. expensive book , m is a measure of h ; topic ( top ) , e.g. weather report , h is concerned with m ; type ( type ) , e.g. oak tree , m is a type of h . ) the following experiments use single nearest neighbor classification with leave-one-out cross-validation . for leave-one-out cross-validation , the testing set consists of a single noun-modifier pair and the training set consists of the 599 remaining noun-modifiers . the data set is split 600 times , so that each noun-modifier gets a turn as the testing word pair . the predicted class of the testing pair is the class of the single nearest neighbor in the training set . as the measure of nearness , we use lra to calculate the relational similarity between the testing pair and the training pairs . the single nearest neighbor algorithm is a supervised learning algorithm ( i.e. , it requires a training set of labeled data ) , but we are using lra to measure the distance between a pair and its potential neighbors , and lra is itself determined in an unsupervised fashion ( i.e. , lra does not need labeled data ) . each sat question has five choices , so answering 374 sat questions required calculating 374 x 5 x 16 = 29,920 cosines . the factor of 16 comes from the alternate pairs , step 11 in lra . with the noun-modifier pairs , using leave-one-out cross-validation , each test pair has 599 choices , so an exhaustive application of lra would require calculating 600 x 599 x 16 = 5,750,400 cosines . to reduce the amount of computation required , we first find the 30 nearest neighbors for each pair , ignoring the alternate pairs ( 600 x 599 = 359,400 cosines ) , and then apply the full lra , including the alternates , to just those 30 neighbors ( 600 x 30 x 16 = 288,000 cosines ) , which requires calculating only 359,400 + 288,000 = 647,400 cosines .
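a sketch of the leave-one-out single nearest neighbor procedure with the 30-neighbor shortcut described above ( python ; cheap_similarity and full_similarity are hypothetical stand-ins for the cosine without alternates and the full lra measure , and the data is a random toy set ) .

```python
import numpy as np

def leave_one_out_1nn(items, labels, cheap_similarity, full_similarity, prefilter=30):
    # single nearest neighbor classification with leave-one-out cross-validation;
    # a cheap measure shortlists candidates, the full measure picks the neighbor
    n = len(items)
    correct = 0
    for i in range(n):
        candidates = [j for j in range(n) if j != i]
        candidates.sort(key=lambda j: cheap_similarity(items[i], items[j]), reverse=True)
        shortlist = candidates[:prefilter]
        best = max(shortlist, key=lambda j: full_similarity(items[i], items[j]))
        correct += int(labels[best] == labels[i])
    return correct / n

# toy stand-ins: a random vector per noun-modifier pair, cosine for both measures
rng = np.random.default_rng(4)
vecs = rng.normal(size=(20, 16))
labels = list(rng.integers(0, 3, size=20))
cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
sim = lambda p, q: cos(vecs[p], vecs[q])
print(leave_one_out_1nn(list(range(20)), labels, sim, sim, prefilter=5))
```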
there are 600 word pairs in the input set for lra . in step 2 , introducing alternate pairs multiplies the number of pairs by four , resulting in 2,400 pairs . in step 5 , for each pair a : b , we add b : a , yielding 4,800 pairs . however , some pairs are dropped because they correspond to zero vectors and a few words do not appear in lin ’ s thesaurus . the sparse matrix ( step 7 ) has 4,748 rows and 8,000 columns , with a density of 8.4 % . following turney and littman ( 2005 ) , we evaluate the performance by accuracy and also by the macroaveraged f measure ( lewis 1991 ) . macroaveraging calculates the precision , recall , and f for each class separately , and then calculates the average across all classes . microaveraging combines the true positive , false positive , and false negative counts for all of the classes , and then calculates precision , recall , and f from the combined counts . macroaveraging gives equal weight to all classes , but microaveraging gives more weight to larger classes . we use macroaveraging ( giving equal weight to all classes ) , because we have no reason to believe that the class sizes in the data set reflect the actual distribution of the classes in a real corpus . classification with 30 distinct classes is a hard problem . to make the task easier , we can collapse the 30 classes to 5 classes , using the grouping that is given in table 19 . for example , agent and beneficiary both collapse to participant . on the 30 class problem , lra with the single nearest neighbor algorithm achieves an accuracy of 39.8 % ( 239/600 ) and a macroaveraged f of 36.6 % . always guessing the majority class would result in an accuracy of 8.2 % ( 49/600 ) . on the 5 class problem , the accuracy is 58.0 % ( 348/600 ) and the macroaveraged f is 54.6 % . always guessing the majority class would give an accuracy of 43.3 % ( 260/600 ) . for both the 30 class and 5 class problems , lra ’ s accuracy is significantly higher than guessing the majority class , with 95 % confidence , according to the fisher exact test ( agresti 1990 ) . table 20 shows the performance of lra and vsm on the 30 class problem . vsm-av is vsm with the altavista corpus and vsm-wmts is vsm with the wmts corpus . the results for vsm-av are taken from turney and littman ( 2005 ) . all three pairwise differences in the three f measures are statistically significant at the 95 % level , according to the paired t-test ( feelders and verkooijen 1995 ) . the accuracy of lra is significantly higher than the accuracies of vsm-av and vsm-wmts , according to the fisher exact test ( agresti 1990 ) , but the difference between the two vsm accuracies is not significant . table 21 compares the performance of lra and vsm on the 5 class problem . the accuracy and f measure of lra are significantly higher than the accuracies and the experimental results in sections 6 and 7 demonstrate that lra performs significantly better than the vsm , but it is also clear that there is room for improvement . the accuracy might not yet be adequate for practical applications , although past work has shown that it is possible to adjust the trade-off of precision versus recall ( turney and littman 2005 ) . for some of the applications , such as information extraction , lra might be suitable if it is adjusted for high precision , at the expense of low recall . another limitation is speed ; it took almost 9 days for lra to answer 374 analogy questions . however , with progress in computer hardware , speed will gradually become less of a concern . 
also , the software has not been optimized for speed ; there are several places where the efficiency could be increased and many operations are parallelizable . it may also be possible to precompute much of the information for lra , although this would require substantial changes to the algorithm . the difference in performance between vsm-av and vsm-wmts shows that vsm is sensitive to the size of the corpus . although lra is able to surpass vsm-av when the wmts corpus is only about one tenth the size of the av corpus , it seems likely that lra would perform better with a larger corpus . the wmts corpus requires one terabyte of hard disk space , but progress in hardware will likely make 10 or even 100 terabytes affordable in the relatively near future . for noun-modifier classification , more labeled data should yield performance improvements . with 600 noun-modifier pairs and 30 classes , the average class has only 20 examples . we expect that the accuracy would improve substantially with 5 or 10 times more examples . unfortunately , it is time consuming and expensive to acquire hand-labeled data . another issue with noun-modifier classification is the choice of classification scheme for the semantic relations . the 30 classes of nastase and szpakowicz ( 2003 ) might not be the best scheme . other researchers have proposed different schemes ( vanderwende 1994 ; barker and szpakowicz 1998 ; rosario and hearst 2001 ; rosario , hearst , and fillmore 2002 ) . it seems likely that some schemes are easier for machine learning than others . for some applications , 30 classes may not be necessary ; the 5 class scheme may be sufficient . lra , like vsm , is a corpus-based approach to measuring relational similarity . past work suggests that a hybrid approach , combining multiple modules , some corpusbased , some lexicon-based , will surpass any purebred approach ( turney et al . 2003 ) . in future work , it would be natural to combine the corpus-based approach of lra with the lexicon-based approach of veale ( 2004 ) , perhaps using the combination method of turney et al . ( 2003 ) . svd is only one of many methods for handling sparse , noisy data . we have also experimented with non-negative matrix factorization ( nmf ) ( lee and seung 1999 ) , probabilistic latent semantic analysis ( plsa ) ( hofmann 1999 ) , kernel principal components analysis ( kpca ) ( scholkopf , smola , and muller 1997 ) , and iterative scaling ( is ) ( ando 2000 ) . we had some interesting results with small matrices ( around 2,000 rows by 1,000 columns ) , but none of these methods seemed substantially better than svd and none of them scaled up to the matrix sizes we are using here ( e.g. , 17,232 rows and 8,000 columns ; see section 6.1 ) . in step 4 of lra , we simply select the top num patterns most frequent patterns and discard the remaining patterns . perhaps a more sophisticated selection algorithm would improve the performance of lra . we have tried a variety of ways of selecting patterns , but it seems that the method of selection has little impact on performance . we hypothesize that the distributed vector representation is not sensitive to the selection method , but it is possible that future work will find a method that yields significant improvement in performance . this article has introduced a new method for calculating relational similarity , latent relational analysis . 
the experiments demonstrate that lra performs better than the vsm approach , when evaluated with sat word analogy questions and with the task of classifying noun-modifier expressions . the vsm approach represents the relation between a pair of words with a vector , in which the elements are based on the frequencies of 64 hand-built patterns in a large corpus . lra extends this approach in three ways : ( 1 ) the patterns are generated dynamically from the corpus , ( 2 ) svd is used to smooth the data , and ( 3 ) a thesaurus is used to explore variations of the word pairs . with the wmts corpus ( about 5 Γ— 1010 english words ) , lra achieves an f of 56.5 % , whereas the f of vsm is 40.3 % . we have presented several examples of the many potential applications for measures of relational similarity . just as attributional similarity measures have proven to have many practical uses , we expect that relational similarity measures will soon become widely used . gentner et al . ( 2001 ) argue that relational similarity is essential to understanding novel metaphors ( as opposed to conventional metaphors ) . many researchers have argued that metaphor is the heart of human thinking ( lakoff and johnson 1980 ; hofstadter and the fluid analogies research group 1995 ; gentner et al . 2001 ; french 2002 ) . we believe that relational similarity plays a fundamental role in the mind and therefore relational similarity measures could be crucial for artificial intelligence . in future work , we plan to investigate some potential applications for lra . it is possible that the error rate of lra is still too high for practical applications , but the fact that lra matches average human performance on sat analogy questions is encouraging . thanks to michael littman for sharing the 374 sat analogy questions and for inspiring me to tackle them . thanks to vivi nastase and stan szpakowicz for sharing their 600 classified noun-modifier phrases . thanks to egidio terra , charlie clarke , and the school of computer science of the university of waterloo , for giving us a copy of the waterloo multitext system and their terabyte corpus . thanks to dekang lin for making his dependency-based word similarity lexicon available online . thanks to doug rohde for svdlibc and michael berry for svdpack . thanks to ted pedersen for making his wordnet : :similarity package available . thanks to joel martin for comments on the article . thanks to the anonymous reviewers of computational linguistics for their very helpful comments and suggestions .
similarity of semantic relations there are at least two kinds of similarity . relational similarity is correspondence between relations , in contrast with attributional similarity , which is correspondence between attributes . when two words have a high degree of attributional similarity , we call them synonyms . when two pairs of words have a high degree of relational similarity , we say that their relations are analogous . for example , the word pair mason : stone is analogous to the pair carpenter : wood . this article introduces latent relational analysis ( lra ) , a method for measuring relational similarity . lra has potential applications in many areas , including information extraction , word sense disambiguation , and information retrieval . recently the vector space model ( vsm ) of information retrieval has been adapted to measuring relational similarity , achieving a score of 47 % on a collection of 374 college-level multiple-choice word analogy questions . in the vsm approach , the relation between a pair of words is characterized by a vector of frequencies of predefined patterns in a large corpus . lra extends the vsm approach in three ways : ( 1 ) the patterns are derived automatically from the corpus , ( 2 ) the singular value decomposition ( svd ) is used to smooth the frequency data , and ( 3 ) automatically generated synonyms are used to explore variations of the word pairs . lra achieves 56 % on the 374 analogy questions , statistically equivalent to the average human score of 57 % . on the related problem of classifying semantic relations , lra achieves similar gains over the vsm . we develop a corpus based approach to model relational similarity , addressing ( among other tasks ) the distinction between synonyms and antonyms . we describe a method ( latent relational analysis ) that extracts subsequence patterns for noun pairs from a large corpus , using query expansion to increase the recall of the search and feature selection and dimensionality reduction to reduce the complexity of the feature space .
further meta-evaluation of machine translation . abstract : this paper analyzes the translation quality of machine translation systems for 10 language pairs translating between czech , english , french , german , hungarian , and spanish . we report the translation quality of over 30 diverse translation systems based on a large-scale manual evaluation involving hundreds of hours of effort . we use the human judgments of the systems to analyze automatic evaluation metrics for translation quality , and we report the strength of the correlation with human judgments at both the system-level and at the sentence-level . we validate our manual evaluation methodology by measuring intra- and inter-annotator agreement , and collecting timing information . this paper presents the results of the shared tasks of the 2008 acl workshop on statistical machine translation , which builds on two past workshops ( koehn and monz , 2006 ; callison-burch et al. , 2007 ) . there were two shared tasks this year : a translation task which evaluated translation between 10 pairs of european languages , and an evaluation task which examines automatic evaluation metrics . there were a number of differences between this year 's workshop and last year 's workshop : • news test set – we hired people to translate newspaper articles from a number of different sources . this out-of-domain test set contrasts with the in-domain europarl test set . • new language pairs – we evaluated the quality of hungarian-english machine translation . hungarian is a challenging language because it is agglutinative , has many cases and verb conjugations , and has freer word order . german-spanish was our first language pair that did not include english , but was not manually evaluated since it attracted minimal participation . • system combination – the university of saarland collected the output of a number of rule-based mt systems , and provided their output , which were also treated as fully fledged entries in the manual evaluation . three additional groups were invited to apply their system combination algorithms to all systems . • sentence-level evaluation of automatic metrics – in addition to measuring the correlation of automatic evaluation metrics with human judgments at the system level , we also measured how consistent they were with the human rankings of individual sentences . the remainder of this paper is organized as follows : section 2 gives an overview of the shared translation task , describing the test sets , the materials that were provided to participants , and a list of the groups who participated . section 3 describes the manual evaluation of the translations , including information about the different types of judgments that were solicited and how much data was collected . section 4 presents the results of the manual evaluation . section 5 gives an overview of the shared evaluation task , describes which automatic metrics were submitted , and tells how they were evaluated . section 6 presents the results of the evaluation task . section 7 validates the manual evaluation methodology . 2 overview of the shared translation task : the shared translation task consisted of 10 language pairs : english to german , german to english , english to spanish , spanish to english , english to french , french to english , english to czech , czech to english , hungarian to english , and german to spanish . each language pair had two test sets drawn from the proceedings of the european parliament , or from newspaper articles . the test data for this year 's task differed from previous years ' data .
instead of only reserving a portion of the training data as the test set , we hired people to translate news articles that were drawn from a variety of sources during november and december of 2007 . we refer to this as the news test set . a total of 90 articles were selected , 15 each from a variety of czech- , english- , french- , german- , hungarianand spanish-language news sites:2 hungarian : napi ( 3 documents ) , index ( 2 ) , origo ( 5 ) , nΒ΄epszabadsΒ΄ag ( 2 ) , hvg ( 2 ) , uniospez ( 1 ) the translations were created by the members of euromatrix consortium who hired a mix of professional and non-professional translators . all translators were fluent or native speakers of both languages , and all translations were proofread by a native speaker of the target language . all of the translations were done directly , and not via an intermediate language . so for instance , each of the 15 hungarian articles were translated into czech , english , french , german and spanish . the total cost of creating the 6 test sets consisting of 2,051 sentences in each language was approximately 17,200 euros ( around 26,500 dollars at current exchange rates , at slightly more than 10c/word ) . having a test set that is balanced in six different source languages and translated across six languages raises some interesting questions . for instance , is it easier , when the machine translation system translates in the same direction as the human translator ? we found no conclusive evidence that shows this . what is striking , however , that the parts differ dramatically in difficulty , based on the original source language . for instance the edinburgh french-english system has a bleu score of 26.8 on the part that was originally spanish , but a score of on 9.7 on the part that was originally hungarian . for average scores for each original language , see table 1 . in order to remain consistent with previous evaluations , we also created a europarl test set . the europarl test data was again drawn from the transcripts of eu parliamentary proceedings from the fourth quarter of 2000 , which is excluded from the europarl training data . our rationale behind investing a considerable sum to create the news test set was that we believe that it more accurately represents the quality of systems ’ translations than when we simply hold out a portion of the training data as the test set , as with the europarl set . for instance , statistical systems are heavily optimized to their training data , and do not perform as well on out-of-domain data ( koehn and schroeder , 2007 ) . having both the news test set and the europarl test set allows us to contrast the performance of systems on in-domain and out-of-domain data , and provides a fairer comparison between systems trained on the europarl corpus and systems that were developed without it . to lower the barrier of entry for newcomers to the field , we provided a complete baseline mt system , along with data resources . we provided : the performance of this baseline system is similar to the best submissions in last year ’ s shared task . the training materials are described in figure 1 . we received submissions from 23 groups from 18 institutions , as listed in table 2 . we also evaluated seven additional commercial rule-based mt systems , bringing the total to 30 systems . this is a significant increase over last year ’ s shared task , where there were submissions from 15 groups from 14 institutions . 
of the 15 groups that participated in last year 's shared task , 11 groups returned this year . one of the goals of the workshop was to attract submissions from newcomers to the field , and we are pleased to have attracted many smaller groups , some as small as a single graduate student and her adviser . the 30 submitted systems represent a broad range of approaches to statistical machine translation . these include statistical phrase-based and rule-based ( rbmt ) systems ( which together made up the bulk of the entries ) , and also hybrid machine translation , and statistical tree-based systems . for most language pairs , we assembled a solid representation of the state of the art in machine translation . in addition to individual systems being entered , this year we also solicited a number of entries which combined the results of other systems . we invited researchers at bbn , carnegie mellon university , and the university of edinburgh to apply their system combination algorithms to all of the systems submitted to the shared translation task . we designated the translations of the europarl set as the development data for combination techniques which weight each system . ( footnote 3 : since the performance of systems varied significantly between the europarl and news test sets , such weighting might not be optimal . however this was a level playing field , since none of the individual systems had development data for the news set either . ) cmu combined the french-english systems , bbn combined the french-english and german-english systems , and edinburgh submitted combinations for the french-english and german-english systems as well as a multi-source system combination which combined all systems which translated from any language pair into english for the news test set . the university of saarland also produced a system combination over six commercial rbmt systems ( eisele et al. , 2008 ) . saarland graciously provided the output of these systems , which we manually evaluated alongside all other entries . for more on the participating systems , please refer to the respective system descriptions in the proceedings of the workshop . ( figure 1 , training materials : the training data is drawn from the europarl corpus and from the project syndicate , a web site which collects political commentary in multiple languages . for czech and hungarian we use other available parallel corpora . note that the number of words is computed based on the provided tokenizer and that the number of distinct words is based on lowercased tokens . ) as with last year 's workshop , we placed greater emphasis on the human evaluation than on the automatic evaluation metric scores . it is our contention that automatic measures are an imperfect substitute for human assessment of translation quality . therefore , rather than select an official automatic evaluation metric like the nist machine translation workshop does ( przybocki and peterson , 2008 ) , we define the manual evaluation to be primary , and use the human judgments to validate automatic metrics . manual evaluation is time consuming , and it requires a monumental effort to conduct it on the scale of our workshop . we distributed the workload across a number of people , including shared task participants , interested volunteers , and a small number of paid annotators . more than 100 people participated in the manual evaluation , with 75 people putting in more than an hour 's worth of effort , and 25 putting in more than four hours . a collective total of 266 hours of labor was invested .
we wanted to ensure that we were using our annotators ’ time effectively , so we carefully designed the manual evaluation process . in our analysis of last year ’ s manual evaluation we found that the niststyle fluency and adequacy scores ( ldc , 2005 ) were overly time consuming and inconsistent.4 we therefore abandoned this method of evaluating the translations . we asked people to evaluate the systems ’ output in three different ways : the manual evaluation software asked for repeated judgments from the same individual , and had multiple people judge the same item , and logged the time it took to complete each judgment . this allowed us to measure intra- and inter-annotator agreement , and to analyze the average amount of time it takes to collect the different kinds of judgments . our analysis is presented in section 7 . ranking translations relative to each other is a relatively intuitive and straightforward task . we therefore kept the instructions simple . the instructions for this task were : 4it took 26 seconds on average to assign fluency and adequacy scores to a single sentence , and the inter-annotator agreement had a kappa of between .225–.25 , meaning that annotators assigned the same scores to identical sentences less than 40 % of the time . rank each whole sentence translation from best to worst relative to the other choices ( ties are allowed ) . ranking several translations at a time is a variant of force choice judgments where a pair of systems is presented and an annotator is asked β€œ is a better than b , worse than b , or equal to b. ” in our experiments , annotators were shown five translations at a time , except for the hungarian and czech language pairs where there were fewer than five system submissions . in most cases there were more than 5 systems submissions . we did not attempt to get a complete ordering over the systems , and instead relied on random selection and a reasonably large sample size to make the comparisons fair . we continued the constituent-based evaluation that we piloted last year , wherein we solicited judgments about the translations of short phrases within sentences rather than whole sentences . we parsed the source language sentence , selected syntactic constituents from the tree , and had people judge the translations of those syntactic phrases . in order to draw judges ’ attention to these regions , we highlighted the selected source phrases and the corresponding phrases in the translations . the corresponding phrases in the translations were located via automatic word alignments . figure 2 illustrates how the source and reference phrases are highlighted via automatic word alignments . the same is done for sentence and each of the system translations . the english , french , german and spanish test sets were automatically parsed using high quality parsers for those languages ( bikel , 2002 ; arun and keller , 2005 ; dubey , 2005 ; bick , 2006 ) . the word alignments were created with giza++ ( och and ney , 2003 ) applied to a parallel corpus containing the complete europarl training data , plus sets of 4,051 sentence pairs created by pairing the test sentences with the reference translations , and the test sentences paired with each of the system translations . the phrases in the translations were located using standard phrase extraction techniques ( koehn et al. , 2003 ) . 
because the word-alignments were created automatically , and because the phrase extraction is heuristic , the phrases that were selected may not exactly correspond to the translations of the selected source phrase . we noted this in the instructions to judges : rank each constituent translation from best to worst relative to the other choices ( ties are allowed ) . grade only the highlighted part of each translation . please note that segments are selected automatically , and they should be taken as an approximate guide . they might include extra words that are not in the actual alignment , or miss words on either end . we used several criteria to select which constituents to evaluate ; the final criterion helped reduce the number of alignment errors , but may have biased the sample to phrases that are more easily aligned . this year we introduced a variant on the constituent-based evaluation , where instead of asking judges to rank the translations of phrases relative to each other , we asked them to indicate which phrasal translations were acceptable and which were not . the instructions were : decide if the highlighted part of each translation is acceptable , given the reference . this should not be a relative judgment against the other system translations . the instructions also contained the same caveat about the automatic alignments as above . for each phrase the judges could click on " yes " , " no " , or " not sure " . the number of times people clicked on " not sure " varied by language pair and task . it was selected as little as 5 % of the time for the english-spanish news task and as much as 12.5 % of the time for the czech-english news task . we collected judgments using a web-based tool that presented judges with batches of each type of evaluation . we presented them with five screens of sentence rankings , ten screens of constituent rankings , and ten screens of yes/no judgments . the order of the types of evaluation was randomized . in order to measure intra-annotator agreement 10 % of the items were repeated and evaluated twice by each judge . in order to measure inter-annotator agreement 40 % of the items were randomly drawn from a common pool that was shared across all annotators so that we would have items that were judged by multiple annotators . judges were allowed to select whichever data set they wanted , and to evaluate translations into whatever languages they were proficient in . shared task participants were excluded from judging their own systems . in addition to evaluating each language pair individually , we also combined all system translations into english for the news test set , taking advantage of the fact that our test sets were parallel across all languages . this allowed us to gather interesting data about the difficulty of translating from different languages into english . table 3 gives a summary of the number of judgments that we collected for translations of individual sentences . we evaluated 14 translation tasks with three different types of judgments for most of them , for a total of 46 different conditions . in total we collected over 75,000 judgments . despite the large number of conditions we managed to collect between 1,000-2,000 judgments for the constituent-based evaluation , and several hundred to several thousand judgments for the sentence ranking tasks . tables 4 , 5 , and 6 summarize the results of the human evaluation of the quality of the machine translation systems .
table 4 gives the results for the manual evaluation which ranked the translations of sentences . it shows the average number of times that systems were judged to be better than or equal to any other system . table 5 similarly summarizes the results for the manual evaluation which ranked the translations of syntactic constituents . table 6 shows how many times on average a system 's translated constituents were judged to be acceptable in the yes/no evaluation . the bolded items indicate the system that performed the best for each task under that particular evaluation metric . table 7 summarizes the results for the all-english task that we introduced this year . appendix c gives an extremely detailed pairwise comparison between each of the systems , along with an indication of whether the differences are statistically significant . the highest ranking entry for the all-english task was the university of edinburgh 's system combination entry . it uses a technique similar to rosti et al . ( 2007 ) to perform system combination . like the other system combination entrants , it was tuned on the europarl test set and tested on the news test set , using systems that submitted entries to both tasks . the university of edinburgh 's system combination went beyond other approaches by combining output from multiple language pairs ( french-english , german-english and spanish-english ) , resulting in 37 component systems . rather than weighting individual systems , it incorporated weighted features that indicated which language the system was originally translating from . this entry was part of ongoing research in multi-lingual , multi-source translation . since there was no official multilingual system combination track , this entry should be viewed only as a contrastive data point . we analyzed the all-english judgments to see which source languages were preferred more often , thinking that this might be a good indication of how challenging it is for current mt systems to translate from each of the languages into english . for this analysis we collapsed all of the entries derived from one source language into an equivalence class , and judged them against the others . therefore , all french systems were judged against all german systems , and so on . we found that french systems were judged to be better than or equal to other systems 69 % of the time , spanish systems 64 % of the time , german systems 47 % of the time , czech systems 39 % of the time , and hungarian systems 29 % of the time . we performed a similar analysis by collapsing the rbmt systems into one equivalence class , and the other systems into another . we evaluated how well these two classes did on the sentence ranking task for each language pair and test set , and found that rbmt was a surprisingly good approach in many of the conditions . rbmt generally did better on the news test set and for translations into german , suggesting that smt 's forte is in test sets where it has appropriate tuning data and for language pairs with less reordering than between german and english . ( table 7 reports how often each system was judged to be better than or equal to all other systems in the sentence ranking task for the all-english condition ; the subscript indicates the source language of the system . ) the manual evaluation data provides a rich source of information beyond simply analyzing the quality of translations produced by different systems .
in particular , it is especially useful for validating the automatic metrics which are frequently used by the machine translation research community . we continued the shared task which we debuted last year , by examining how well various automatic metrics correlate with human judgments . in addition to examining how well the automatic evaluation metrics predict human judgments at the system level , this year we have also started to measure their ability to predict sentence-level judgments . the automatic metrics that were evaluated in this year 's shared task included , among others , metrics designed to capture some of the allowable variation in translation ( we use a single reference translation in our experiments ) , and part-of-speech based metrics which calculate bleu ( posbleu ) and f-measure ( pos4gramfmeasure ) by matching part of speech 4-grams in a hypothesis translation against the reference translation . in addition to the above metrics , which scored the translations on both the system level and the sentence level , there were a number of metrics which focused on the sentence level : duh ( 2008 ) 's svm-rank , which ranks the system translations of each sentence ( the features included in its training were sentence-level bleu scores and intra-set ranks computed from the entire set of translations ) , and usaar 's evaluation metric ( alignment-prob ) , which uses giza++ to align outputs of multiple systems with the corresponding reference translations , with a bias towards identical one-to-one alignments through a suitably augmented corpus ; the model4 log probabilities in both directions are added and normalized to a scale between 0 and 1 . to measure the correlation of the automatic metrics with the human judgments of translation quality at the system level we used spearman 's rank correlation coefficient rho . we converted the raw scores assigned to each system into ranks . we assigned a ranking to the systems for each of the three types of manual evaluation based on : the percent of time that the sentences it produced were judged to be better than or equal to the translations of any other system ; the percent of time that its constituent translations were judged to be better than or equal to the translations of any other system ; and the percent of time that its constituent translations were judged to be acceptable . we calculated rho three times for each automatic metric , comparing it to each type of human evaluation . since there were no ties , rho can be calculated using the simplified equation $\rho = 1 - \frac{6 \sum_i d_i^2}{n ( n^2 - 1 )}$ , where $d_i$ is the difference between the ranks for system $i$ and $n$ is the number of systems . the possible values of rho range between 1 ( where all systems are ranked in the same order ) and -1 ( where the systems are ranked in the reverse order ) . thus an automatic evaluation metric with a higher value for rho is making predictions that are more similar to the human judgments than an automatic evaluation metric with a lower rho ( a small sketch of this calculation is given below ) . measuring sentence-level correlation under our human evaluation framework was made complicated by the fact that we abandoned the fluency and adequacy judgments which are intended to be absolute scales . some previous work has focused on developing automatic metrics which predict human ranking at the sentence level ( kulesza and shieber , 2004 ; albrecht and hwa , 2007a ; albrecht and hwa , 2007b ) . such work generally used the 5-point fluency and adequacy scales to combine the translations of all sentences into a single ranked list . this list could be compared against the scores assigned by automatic metrics and used to calculate correlation coefficients .
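a minimal sketch of the system-level correlation computation described above , assuming simple python dictionaries mapping system names to a human-derived percentage and to an automatic metric score ; the names and toy data are illustrative .

def ranks(scores):
    """map each system to its rank (1 = best) under the given scores."""
    ordered = sorted(scores, key=scores.get, reverse=True)
    return {system: position + 1 for position, system in enumerate(ordered)}

def spearman_rho(human_scores, metric_scores):
    """simplified spearman formula, valid here because there are no tied ranks."""
    human_rank = ranks(human_scores)
    metric_rank = ranks(metric_scores)
    n = len(human_scores)
    d_squared = sum((human_rank[s] - metric_rank[s]) ** 2 for s in human_scores)
    return 1.0 - (6.0 * d_squared) / (n * (n ** 2 - 1))

# example: five systems scored by humans (percent better-or-equal) and by a metric
human = {"sys1": 0.71, "sys2": 0.64, "sys3": 0.58, "sys4": 0.55, "sys5": 0.40}
metric = {"sys1": 0.31, "sys2": 0.33, "sys3": 0.27, "sys4": 0.22, "sys5": 0.18}
print(spearman_rho(human, metric))  # 0.9 for this toy data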
we did not gather any absolute scores and thus can not compare translations across different sentences . given the seemingly unreliable fluency and adequacy assignments that people make even for translations of the same sentences , it may be dubious to assume that their scoring will be reliable across sentences . the data points that we have available consist of a set of 6,400 human judgments each ranking the output of 5 systems . it is straightforward to construct a ranking of each of those 5 systems using the scores assigned to their translations of that sentence by the automatic evaluation metrics . when the automatic scores have been retrieved , we have 6,400 pairs of ranked lists containing 5 items . how best to treat these is an open discussion , and certainly warrants further thought . it does not seem like a good idea to calculate rho for each pair of ranked lists , because 5 items is an insufficient number to get a reliable correlation coefficient , and it is unclear if averaging over all 6,400 lists would make sense . furthermore , many of the human judgments of the 5 translations contained ties , further complicating matters . therefore rather than calculating a correlation coefficient at the sentence level we instead ascertained how consistent the automatic metrics were with the human judgments . the way that we calculated consistency was the following : for every pairwise comparison of two systems on a single sentence by a person , we counted the automatic metric as being consistent if the relative scores were the same ( i.e . the metric assigned a higher score to the higher ranked system ) . we divided this by the total number of pairwise comparisons to get a percentage . because the systems generally assign real numbers as scores , we excluded pairs that the human annotators ranked as ties . tables 8 and 9 report the system-level rho for each automatic evaluation metric , averaged over all translation directions into english ( table 8 ) and out of english into french , german and spanish ( table 9 ) . ( the tables exclude the spanish-english news task , since it had a negative correlation with most of the automatic metrics ; see tables 19 and 20 . ) for the into-english direction the meteor score with its parameters tuned on adequacy judgments had the strongest correlation with ranking the translations of whole sentences . it was tied with the combined method of gimenez and marquez ( 2008 ) for the highest correlation over all three types of human judgments . bleu was the second to lowest ranked overall , though this may have been due in part to the fact that we were using test sets which had only a single reference translation , since the cost of creating multiple references was prohibitively expensive ( see section 2.1 ) . in the reverse direction , for translations out of english into the other languages , bleu does considerably better , placing second overall after the part-of-speech variant of it proposed by popovic and ney ( 2007 ) . yet another variant of bleu which utilizes meteor 's flexible matching has the strongest correlation for sentence-level ranking . appendix b gives a break down of the correlations for each of the language pairs and test sets . tables 10 and 11 report the consistency of the automatic evaluation metrics with human judgments on a sentence-by-sentence basis , rather than on the system level .
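a minimal sketch of the consistency calculation described above , assuming each human ranking screen has already been reduced to pairwise comparisons ; the data layout is an assumption for illustration .

def consistency(pairwise_human, metric_scores):
    """pairwise_human: list of (sentence_id, sys_a, sys_b, relation),
    with relation in {'<', '>', '='} from the human ranking.
    metric_scores: dict mapping (sentence_id, system) to the metric's score.
    ties in the human ranking are excluded, as described in the text."""
    consistent = total = 0
    for sent_id, sys_a, sys_b, relation in pairwise_human:
        if relation == '=':
            continue
        score_a = metric_scores[(sent_id, sys_a)]
        score_b = metric_scores[(sent_id, sys_b)]
        predicted = '>' if score_a > score_b else '<'
        total += 1
        if predicted == relation:
            consistent += 1
    return consistent / total

# example usage with toy data
human = [(1, "sys1", "sys2", '>'), (1, "sys1", "sys3", '='), (2, "sys2", "sys3", '<')]
scores = {(1, "sys1"): 0.4, (1, "sys2"): 0.3, (1, "sys3"): 0.4,
          (2, "sys2"): 0.2, (2, "sys3"): 0.5}
print(consistency(human, scores))  # 1.0 : the metric agrees on both non-tied pairs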
for the translations into english the ulc metric ( which itself combines many other metrics ) had the strongest correlation with human judgments , correctly predicting the human ranking of each pair of system translations of a sentence more than half the time . this is dramatically higher than the chance baseline , which is not .5 , since the metric must correctly rank a list of systems rather than a pair . for the reverse direction meteor-ranking performs very strongly . the svm-rank metric , which had the lowest overall correlation at the system level , does the best at consistently predicting the human judgments of the translations of syntactic constituents into other languages . in addition to scoring the shared task entries , we also continued our campaign for improving the process of manual evaluation . we measured pairwise agreement among annotators using the kappa coefficient ( k ) which is widely used in computational linguistics for measuring agreement in category judgments ( carletta , 1996 ) . it is defined as $k = \frac{p(a) - p(e)}{1 - p(e)}$ , where p ( a ) is the proportion of times that the annotators agree , and p ( e ) is the proportion of time that they would agree by chance . we define chance agreement for ranking tasks as 1/3 , since there are three possible outcomes when ranking the output of a pair of systems : a > b , a = b , a < b ; and for the yes/no judgments as 1/2 , since we ignored those items marked " not sure " ( a small sketch of this calculation is given below ) . for inter-annotator agreement we calculated p ( a ) for the yes/no judgments by examining all items that were annotated by two or more annotators , and calculating the proportion of time they assigned identical scores to the same items . for the ranking tasks we calculated p ( a ) by examining all pairs of systems which had been judged by two or more judges , and calculated the proportion of time that they agreed that a > b , a = b , or a < b . for intra-annotator agreement we did similarly , but gathered items that were annotated on multiple occasions by a single annotator . table 12 gives k values for inter-annotator agreement , and table 13 gives k values for intra-annotator agreement , for the different types of manual evaluation . these give an indication of how often different judges agree , and how often single judges are consistent for repeated judgments , respectively . the interpretation of kappa varies , but according to landis and koch ( 1977 ) , 0-.2 is slight , .2-.4 is fair , .4-.6 is moderate , .6-.8 is substantial and the rest almost perfect . the inter-annotator agreement for the sentence ranking task was fair , for the constituent ranking it was moderate and for the yes/no judgments it was substantial . for the intra-annotator agreement k indicated that people had moderate consistency with their previous judgments on the sentence ranking task , substantial consistency with their previous constituent ranking judgments , and nearly perfect consistency with their previous yes/no judgments . these k values indicate that people are able to more reliably make simple yes/no judgments about the translations of short phrases than they are to rank phrases or whole sentences . while this is an interesting observation , we do not recommend doing away with the sentence ranking judgments . the higher agreement on the constituent-based evaluation may be influenced by the selection criteria for which phrases were selected for evaluation ( see section 3.2 ) .
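a minimal sketch of the kappa calculation described above ; the pairwise data layout is an assumption , and chance agreement is fixed at 1/3 for the ranking tasks and 1/2 for the yes/no judgments as in the text .

def kappa(pairs_of_labels, chance_agreement):
    """pairs_of_labels: list of (label_1, label_2) giving two judgments of the same
    item, e.g. ('<', '<') for a ranking comparison or ('yes', 'no') for acceptability.
    chance_agreement: 1/3 for ranking, 1/2 for yes/no, as defined in the text."""
    agreements = sum(1 for a, b in pairs_of_labels if a == b)
    p_a = agreements / len(pairs_of_labels)
    p_e = chance_agreement
    return (p_a - p_e) / (1 - p_e)

# example: inter-annotator agreement on ranking comparisons and on yes/no judgments
ranking_pairs = [('<', '<'), ('>', '<'), ('=', '='), ('>', '>'), ('<', '=')]
yes_no_pairs = [('yes', 'yes'), ('no', 'no'), ('yes', 'no'), ('yes', 'yes')]
print(kappa(ranking_pairs, 1 / 3))  # 0.4 : moderate agreement on this toy data
print(kappa(yes_no_pairs, 1 / 2))   # 0.5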
additionally , the judgments of the short phrases are not a great substitute for sentence-level rankings , at least in the way we collected them . the average correlation coefficient between the constituent-based judgments and the sentence ranking judgments is only rho = 0.51 . tables 19 and 20 give a detailed break down of the correlation of the different types of human judgments with each other on each translation task . it may be possible to select phrases in such a way that the constituent-based evaluations are a better substitute for the sentence-based ranking , for instance by selecting more constituents from each sentence , or attempting to cover most of the words in each sentence in a phrase-by-phrase manner . this warrants further investigation . it might also be worthwhile to refine the instructions given to annotators about how to rank the translations of sentences to try to improve their agreement , which is currently lower than we would like it to be ( although it is substantially better than the previous fluency and adequacy scores , which had a k < .25 in last year 's evaluation ) . note that for the constituent-based evaluations we verified that the high k was not trivially due to identical phrasal translations : we excluded screens where all five phrasal translations presented to the annotator were identical , and report both numbers . we used the web interface to collect timing information . the server recorded the time when a set of sentences was given to a judge and the time when the judge returned the sentences . it took annotators an average of 18 seconds per sentence to rank a list of sentences ( sets which took longer than 5 minutes were excluded from these calculations , because there was a strong chance that annotators were interrupted while completing the task ) . it took an average of 10 seconds per sentence for them to rank constituents , and an average of 8.5 seconds per sentence for them to make yes/no judgments . figure 3 shows the distribution of times for these tasks . these timing figures indicate that the tasks which the annotators were the most reliable on ( yes/no judgments and constituent ranking ) were also much quicker to complete than the ones they were less reliable on ( ranking sentences ) . given that they are faster at judging short phrases , they can do proportionally more of them . for instance , we could collect 211 yes/no judgments in the same amount of time that it would take us to collect 100 sentence ranking judgments ( see the short calculation below ) . however , this is partially offset by the fact that many of the translations of shorter phrases are identical , which means that we have to collect more judgments in order to distinguish between two systems . one strong advantage of the yes/no judgments over the ranking judgments is their potential for reuse . we have invested hundreds of hours ' worth of effort evaluating the output of the translation systems submitted to this year 's workshop and last year 's workshop . while the judgments that we collected provide a wealth of information for developing automatic evaluation metrics , we can not re-use them to evaluate our translation systems after we update their parameters or change their behavior in any way . the reason for this is that altered systems will produce different translations than the ones that we have judged , so our relative rankings of sentences will no longer be applicable . however , the translations of short phrases are more likely to be repeated than the translations of whole sentences .
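a short worked check of the throughput figure quoted above , assuming the per-sentence averages can be treated as per-judgment costs :

\[
\frac{100 \times 18\,\mathrm{s}}{8.5\,\mathrm{s}} \approx 212 ,
\]

which is in line with the 211 reported in the text ; the small difference presumably reflects rounding of the average times .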
therefore if we collect a large number of yes/no judgments for short phrases , we could build up a database that contains information about what fragmentary translations are acceptable for each sentence in our test corpus . when we change our system and want to evaluate it , we do not need to manually evaluate those segments that match against the database , and could instead have people evaluate only those phrasal translations which are new . accumulating these judgments over time would give a very reliable idea of what alternative translations were allowable . this would be useful because it could alleviate the problems associated with bleu failing to recognize allowable variation in translation when multiple reference translations are not available ( callison-burch et al. , 2006 ) . a large database of human judgments might also be useful as an objective function for minimum error rate training ( och , 2003 ) or in other system development tasks . similar to previous editions of this workshop we carried out an extensive manual and automatic evaluation of machine translation performance for translating from european languages into english , and vice versa . one important aspect in which this year ’ s shared task differed from previous years was the introduction of an additional newswire test set that was different in nature to the training data . we also added new language pairs to our evaluation : hungarian-english and german-spanish . as in previous years we were pleased to notice an increase in the number of participants . this year we received submissions from 23 groups from 18 institutions . in addition , we evaluated seven commercial rule-based mt systems . the goal of this shared-task is two-fold : first we want to compare state-of-the-art machine translation systems , and secondly we aim to measure to what extent different evaluation metrics can be used to assess mt quality . with respect to mt quality we noticed that the introduction of test sets from a different domain did have an impact on the ranking of systems . we observed that rule-based systems generally did better on the news test set . overall , it can not be concluded that one approach clearly outperforms other approaches , as systems performed differently on the various translation tasks . one general observation is that for the tasks where statistical combination approaches participated , they tended to score relatively high , in particular with respect to bleu . with respect to measuring the correlation between automated evaluation metrics and human judgments we found that using meteor and ulch ( which utilizes a variety of metrics , including meteor ) resulted in the highest spearman correlation scores on average , when translating into english . when translating from english into french , german , and spanish , bleu and posbleu resulted in the highest correlations with human judgments . finally , we investigated inter- and intra-annotator agreement of human judgments using kappa coefficients . we noticed that ranking whole sentences results in relatively low kappa coefficients , meaning that there is only fair agreement between the assessors . constituent ranking and acceptability judgments on the other hand show moderate and substantial inter-annotator agreement , respectively . intraannotator agreement was substantial to almost perfect , except for the sentence ranking assessment where agreement was only moderate . 
although it is difficult to draw exact conclusions from this , one might wonder whether the sentence ranking task is simply too complex , involving too many aspects according to which translations can be ranked . the huge wealth of the data generated by this workshop , including the human judgments , system translations and automatic scores , is available at http : //www.statmt.org/wmt08/ for other researchers to analyze . this work was supported in parts by the euromatrix project funded by the european commission ( 6th framework programme ) , the gale program of the us defense advanced research projects agency , contract no . hr0011-06-c-0022 , and the us national science foundation under grant iis-0713448 . we are grateful to abhaya agarwal , john henderson , rebecca hwa , alon lavie , mark przybocki , stuart shieber , and david smith for discussing different possibilities for calculating the sentence-level correlation of automatic evaluation metrics with human judgments in absence of absolute scores . any errors in design remain the responsibility of the authors . thank you to eckhard bick for parsing the spanish test set . see http : //beta.visl.sdu.dk for more information about the constraint-based parser . thanks to greg hanneman and antti-veikko rosti for applying their system combination algorithms to our data .
further meta-evaluation of machine translation this paper analyzes the translation quality of machine translation systems for 10 language pairs translating between czech , english , french , german , hungarian , and spanish . we report the translation quality of over 30 diverse translation systems based on a large-scale manual evaluation involving hundreds of hours of effort . we use the human judgments of the systems to analyze automatic evaluation metrics for translation quality , and we report the strength of the correlation with human judgments at both the system level and the sentence level . we validate our manual evaluation methodology by measuring intra- and inter-annotator agreement , and collecting timing information . traditionally , human ratings for mt quality have been collected in the form of absolute scores on a five or seven-point likert scale , but low reliability numbers for this type of annotation have raised concerns . thus , the human annotation for the wmt 2008 dataset was collected in the form of binary pairwise preferences that are considerably easier to make .
soft syntactic constraints for hierarchical phrased-based translation in adding syntax to statistical mt , there is a tradeoff between taking advantage of linguistic analysis , versus allowing the model to exploit linguistically unmotivated mappings learned from parallel training data . a number of previous efforts have tackled this tradeoff by starting with a commitment to linguistically motivated analyses and then finding appropriate ways to soften that commitment . we present an approach that explores the tradeoff from the other direction , starting with a context-free translation model learned directly from aligned parallel text , and then adding soft constituent-level constraints based on parses of the source language . we obtain substantial improvements in performance for translation from chinese and arabic to english . the statistical revolution in machine translation , beginning with ( brown et al. , 1993 ) in the early 1990s , replaced an earlier era of detailed language analysis with automatic learning of shallow source-target mappings from large parallel corpora . over the last several years , however , the pendulum has begun to swing back in the other direction , with researchers exploring a variety of statistical models that take advantage of source- and particularly target-language syntactic analysis ( e.g . ( cowan et al. , 2006 ; zollmann and venugopal , 2006 ; marcu et al. , 2006 ; galley et al. , 2006 ) and numerous others ) . chiang ( 2005 ) distinguishes statistical mt approaches that are β€œ syntactic ” in a formal sense , going beyond the finite-state underpinnings of phrasebased models , from approaches that are syntactic in a linguistic sense , i.e . taking advantage of a priori language knowledge in the form of annotations derived from human linguistic analysis or treebanking . ' the two forms of syntactic modeling are doubly dissociable : current research frameworks include systems that are finite state but informed by linguistic annotation prior to training ( e.g. , ( koehn and hoang , 2007 ; birch et al. , 2007 ; hassan et al. , 2007 ) ) , and also include systems employing contextfree models trained on parallel text without benefit of any prior linguistic analysis ( e.g . ( chiang , 2005 ; chiang , 2007 ; wu , 1997 ) ) . over time , however , there has been increasing movement in the direction of systems that are syntactic in both the formal and linguistic senses . in any such system , there is a natural tension between taking advantage of the linguistic analysis , versus allowing the model to use linguistically unmotivated mappings learned from parallel training data . the tradeoff often involves starting with a system that exploits rich linguistic representations and relaxing some part of it . for example , deneefe et al . ( 2007 ) begin with a tree-to-string model , using treebank-based target language analysis , and find it useful to modify it in order to accommodate useful β€œ phrasal ” chunks that are present in parallel training data but not licensed by linguistically motivated parses of the target language . similarly , cowan et al . ( 2006 ) focus on using syntactically rich representations of source and target parse trees , but they resort to phrase-based translation for modifiers within clauses . finding the right way to balance linguistic analysis with unconstrained data-driven modeling is clearly a key challenge . in this paper we address this challenge from a less explored direction . 
rather than starting with a system based on linguistically motivated parse trees , we begin with a model that is syntactic only in the formal sense . we then introduce soft constraints that take source-language parses into account to a limited extent . introducing syntactic constraints in this restricted way allows us to take maximal advantage of what can be learned from parallel training data , while effectively factoring in key aspects of linguistically motivated analysis . as a result , we obtain substantial improvements in performance for both chinese-english and arabic-english translation . in section 2 , we briefly review the hiero statistical mt framework ( chiang , 2005 , 2007 ) , upon which this work builds , and we discuss chiang ’ s initial effort to incorporate soft source-language constituency constraints for chinese-english translation . in section 3 , we suggest that an insufficiently fine-grained view of constituency constraints was responsible for chiang ’ s lack of strong results , and introduce finer grained constraints into the model . section 4 demonstrates the the value of these constraints via substantial improvements in chineseenglish translation performance , and extends the approach to arabic-english . section 5 discusses the results , and section 6 considers related work . finally we conclude in section 7 with a summary and potential directions for future work . hiero ( chiang , 2005 ; chiang , 2007 ) is a hierarchical phrase-based statistical mt framework that generalizes phrase-based models by permitting phrases with gaps . formally , hiero ’ s translation model is a weighted synchronous contextfree grammar . hiero employs a generalization of the standard non-hierarchical phrase extraction approach in order to acquire the synchronous rules of the grammar directly from word-aligned parallel text rules have the form x β†’ he , 1i , where e and f are phrases containing terminal symbols ( words ) and possibly co-indexed instances of the nonterminal symbol x.2 associated with each rule is a set of translation model features , oi ( οΏ½f , e ) ; for example , one intuitively natural feature of a rule is the phrase translation ( log- ) probability o ( f , e ) _ log p ( e |f ) , directly analogous to the corresponding feature in non-hierarchical phrase-based models like pharaoh ( koehn et al. , 2003 ) . in addition to this phrase translation probability feature , hiero ’ s feature set includes the inverse phrase translation probability log p ( οΏ½f|e ) , lexical weights lexwt ( οΏ½f|e ) and lexwt ( e |οΏ½f ) , which are estimates of translation quality based on word-level correspondences ( koehn et al. , 2003 ) , and a rule penalty allowing the model to learn a preference for longer or shorter derivations ; see ( chiang , 2007 ) for details . these features are combined using a log-linear model , with each synchronous rule contributing to the total log-probability of a derived hypothesis . each ai is a weight associated with feature oi , and these weights are typically optimized using minimum error rate training ( och , 2003 ) . when looking at hiero rules , which are acquired automatically by the model from parallel text , it is easy to find many cases that seem to respect linguistically motivated boundaries . for example , seems to capture the use of jingtian/this year as a temporal modifier when building linguistic constituents such as noun phrases ( the election this year ) or verb phrases ( voted in the primary this year ) . 
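a sketch of the log-linear score referred to above as expression ( 1 ) , reconstructed from the description of the features and weights ; the exact formulation in the original paper may differ slightly .

\[
\mathrm{score}(D) \;=\; \sum_{i} \lambda_i \, \phi_i(D) ,
\]

where $D$ is a synchronous derivation , each $\phi_i$ is a feature ( for rule-level features such as $\log p(e \mid f)$ , $\phi_i(D)$ accumulates the feature values of the rules $X \rightarrow \langle e, f \rangle$ used in $D$ ) , and each $\lambda_i$ is the corresponding weight tuned by minimum error rate training .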
however , it is important to observe that nothing in the hiero framework actually requires nonterminal symbols to cover linguistically sensible constituents , and in practice they frequently do not . chiang ( 2005 ) conjectured that there might be value in allowing the hiero model to favor hypotheses for which the synchronous derivation respects linguistically motivated source-language constituency boundaries , as identified using a parser . he tested this conjecture by adding a soft constraint in the form of a " constituency feature " : if a synchronous rule x → ⟨ e , f ⟩ is used in a derivation , and the span of f is a constituent in the source-language parse , then a term λ is added to the model score in expression ( 1 ) . unlike a hard constraint , which would simply prevent the application of rules violating syntactic boundaries , using the feature to introduce a soft constraint allows the model to boost the " goodness " for a rule if it is consistent with the source language constituency analysis , and to leave its score unchanged otherwise . the weight λ , like all other λi , is set via minimum error rate training , and that optimization process determines empirically the extent to which the constituency feature should be trusted . figure 1 illustrates the way the constituency feature worked , treating english as the source language for the sake of readability . in this example , λ would be added to the hypothesis score for any rule used in the hypothesis whose source side spanned the minister , a speech , yesterday , gave a speech yesterday , or the minister gave a speech yesterday . a rule translating , say , minister gave a as a unit would receive no such boost . chiang tested the constituency feature for chinese-english translation , and obtained no significant improvement on the test set . the idea then seems essentially to have been abandoned ; it does not appear in later discussions ( chiang , 2007 ) . on the face of it , there are any number of possible reasons chiang 's ( 2005 ) soft constraint did not work , including , for example , practical issues like the quality of the chinese parses . however , we focus here on two conceptual issues underlying his use of source language syntactic constituents . first , the constituency feature treats all syntactic constituent types equally , making no distinction among them . for any given language pair , however , there might be some source constituents that tend to map naturally to the target language as units , and others that do not ( fox , 2002 ; eisner , 2003 ) . moreover , a parser may tend to be more accurate for some constituents than for others . second , the chiang ( 2005 ) constituency feature gives a rule additional credit when the rule 's source side overlaps exactly with a source-side syntactic constituent . logically , however , it might make sense not just to give a rule x → ⟨ e , f ⟩ extra credit when f matches a constituent , but to incur a cost when f violates a constituent boundary . using the example in figure 1 , we might want to penalize hypotheses containing rules where f is the minister gave a ( and other cases , such as minister gave , minister gave a , and so forth ) . these observations suggest a finer-grained approach to the constituency feature idea , retaining the idea of soft constraints , but applying them using various soft-constraint constituency features . our first observation argues for distinguishing among constituent types ( np , vp , etc . ) .
our second observation argues for distinguishing the benefit of matching constituents from the cost of crossing constituent boundaries . ( this accomplishes coverage of the logically complete set of possibilities , which includes not only f matching a constituent exactly or crossing its boundaries , but also f being properly contained within the constituent span , properly containing it , or being outside it entirely ; whenever these latter possibilities occur , f will exactly match or cross the boundaries of some other constituent . ) we therefore define a space of new features as the cross product { cp , ip , np , vp , ... } × { = , + } , where = and + signify matching and crossing boundaries , respectively . for example , θnp= would denote a binary feature that matches whenever the span of f exactly covers an np in the source-side parse tree , resulting in λnp= being added to the hypothesis score ( expression ( 1 ) ) . similarly , θvp+ would denote a binary feature that matches whenever the span of f crosses a vp boundary in the parse tree , resulting in λvp+ being subtracted from the hypothesis score ( formally , λvp+ simply contributes to the sum in expression ( 1 ) , as with all features in the model , but weight optimization using minimum error rate training should , and does , automatically assign this feature a negative weight ) . for readability from this point forward , we will omit θ from the notation and refer to features such as np= ( which one could read as " np match " ) , vp+ ( which one could read as " vp crossing " ) , etc . in addition to these individual features , we define three more variants : • for each constituent type , e.g . np , we define a feature np_ that ties the weights of np= and np+ . if np= matches a rule , the model score is incremented by λnp_ , and if np+ matches , the model score is decremented by the same quantity . • for each constituent type , e.g . np , we define a version of the model , np2 , in which np= and np+ are both included as features , with separate weights λnp= and λnp+ . • we define a set of " standard " linguistic labels containing { cp , ip , np , vp , pp , adjp , advp , qp , lcp , dnp } and excluding other labels such as prn ( parentheses ) , frag ( fragment ) , etc . ( we map sbar and s labels in arabic parses to cp and ip , respectively , consistent with the chinese parses ; we map chinese dp labels to np ; dnp and lcp appear only in chinese ; we ran no adjp experiment in chinese , because this label virtually always spans only one token in the chinese parses . ) we define feature xp= as the disjunction of { cp= , ip= , ... , dnp= } ; i.e . its value equals 1 for a rule if the span of f exactly covers a constituent having any of the standard labels . the definitions of xp+ , xp_ , and xp2 are analogous . since the xp= feature can be viewed as a disjunctive " all-labels= " feature restricted to the standard labels , we also defined " all-labels+ " , " all-labels2 " , and " all-labels_ " analogously ( a small sketch of how these features can be computed from a parse is given below ) . we carried out mt experiments for translation from chinese to english and from arabic to english , using a descendant of chiang 's hiero system . language models were built using the sri language modeling toolkit ( stolcke , 2002 ) with modified kneser-ney smoothing ( chen and goodman , 1998 ) . word-level alignments were obtained using giza++ ( och and ney , 2000 ) . the baseline model in both languages used the feature set described in section 2 ; for the chinese baseline we also included a rule-based number translation feature ( chiang , 2007 ) .
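a minimal sketch of the finer-grained features described above , distinguishing exact matches ( label= ) from crossing violations ( label+ ) for each constituent label ; the span-crossing test and data layout are illustrative assumptions rather than the paper 's implementation .

def crosses(span, constituent):
    """true if span straddles exactly one boundary of the constituent."""
    s, e = span
    cs, ce = constituent
    return (s < cs < e < ce) or (cs < s < ce < e)

def finer_grained_features(rule_span, labeled_constituents):
    """labeled_constituents: list of (label, (start, end)) spans from the source parse.
    returns the set of fired features, e.g. {'np=', 'vp+'}; '=' features are rewarded
    and '+' features penalized by their (separately tuned) weights."""
    fired = set()
    for label, span in labeled_constituents:
        if rule_span == span:
            fired.add(label.lower() + '=')
        elif crosses(rule_span, span):
            fired.add(label.lower() + '+')
    return fired

# example parse spans for "the minister gave a speech yesterday"
parse = [('NP', (0, 2)), ('NP', (3, 5)), ('VP', (2, 6)), ('IP', (0, 6))]
print(finer_grained_features((3, 5), parse))  # {'np='} : exactly covers "a speech"
print(finer_grained_features((1, 4), parse))  # {'np+', 'vp+'} : crosses np and vp boundaries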
in order to compute syntactic features , we analyzed source sentences using state of the art , tree-bank trained constituency parsers ( ( huang et al. , 2008 ) for chinese , and the stanford parser v.2007-08-19 for arabic ( klein and manning , 2003a ; klein and manning , 2003b ) ) . in addition to the baseline condition , and baseline plus chiang ’ s ( 2005 ) original constituency feature , experimental conditions augmented the baseline with additional features as described in section 3 . all models were optimized and tested using the bleu metric ( papineni et al. , 2002 ) with the nistimplemented ( β€œ shortest ” ) effective reference length , on lowercased , tokenized outputs/references . statistical significance of difference from the baseline bleu score was measured by using paired bootstrap re-sampling ( koehn , 2004 ) .9 for the chinese-english translation experiments , we trained the translation model on the corpora in table 1 , totalling approximately 2.1 million sentence pairs after giza++ filtering for length ratio . chinese text was segmented using the stanford segmenter ( tseng et al. , 2005 ) . we trained a 5-gram language model using the english ( target ) side of the training set , pruning 4gram and 5-gram singletons . for minimum error rate training and development we used the nist mteval mt03 set . table 2 presents our results . we first evaluated translation performance using the nist mt06 ( nisttext ) set . like chiang ( 2005 ) , we find that the original , undifferentiated constituency feature ( chiang05 ) introduces a negligible , statistically insignificant improvement over the baseline . however , we find that several of the finer-grained constraints ( ip= , vp= , vp+ , qp+ , and np= ) achieve statistically significant improvements over baseline ( up to .74 bleu ) , and the latter three also improve significantly on the undifferentiated constituency feature . by combining multiple finer-grained syntactic features , we obtain significant improvements of up to 1.65 bleu points ( np_ , vp2 , ip2 , all-labels_ , and xp+ ) . we also obtained further gains using combinations of features that had performed well ; e.g. , condition ip2.vp2.np_ augments the baseline features with ip2 and vp2 ( i.e . ip= , ip+ , vp= and vp+ ) , and np_ ( tying weights of np= and np+ ; see section 3 ) . since component features in those combinations were informed by individual-feature performance on the test set , we tested the best performing conditions from mt06 on a new test set , nist mt08 . np= and vp+ yielded significant improvements of up to 1.53 bleu . combination conditions replicated the pattern of results from mt06 , including the same increasing order of gains , with improvements up to 1.11 bleu . for arabic-english translation , we used the training corpora in table 3 , approximately 100,000 sentence pairs after giza++ length-ratio filtering . we trained a trigram language model using the english side of this training set , plus the english gigaword v2 afp and gigaword v1 xinhua corpora . development and minimum error rate training were done using the nist mt02 set . table 4 presents our results . we first tested on on the nist mt03 and mt06 ( nist-text ) sets . on mt03 , the original , undifferentiated constituency feature did not improve over baseline . two individual finer-grained features ( pp+ and advp= ) yielded statistically significant gains up to .42 bleu points , and feature combinations ap2 , xp2 and all-labels2 yielded significant gains up to 1.03 bleu points . 
xp2 and all-labels2 also improved significantly on the undifferentiated constituency feature , by .72 and 1.11 bleu points , respectively . for mt06 , chiang 's original feature improved the baseline significantly ( this is a new result using his feature , since he did not experiment with arabic ) , as did our ip= , pp= , and vp= conditions . adding individual features pp+ and advp= yielded significant improvements up to 1.4 bleu points over baseline , and in fact the improvement for individual feature advp= over chiang 's undifferentiated constituency feature approaches significance ( p < .075 ) . more important , several conditions combining features achieved statistically significant improvements over baseline of up to 1.94 bleu points : xp2 , ip2 , ip , vp=.pp+.advp= , ap2 , pp+.advp= , and advp2 . of these , advp2 is also a significant improvement over the undifferentiated constituency feature ( chiang-05 ) , with p < .01 . as we did for chinese , we tested the best-performing models on a new test set , nist mt08 . consistent patterns reappeared : improvements over the baseline up to 1.69 bleu ( p < .01 ) , with advp2 again in the lead ( also outperforming the undifferentiated constituency feature , p < .05 ) . ( key to the significance markers used in the results tables : * : better than baseline ( p < .05 ) ; ** : better than baseline ( p < .01 ) ; + : better than chiang-05 ( p < .05 ) ; ++ : better than chiang-05 ( p < .01 ) ; - : almost significantly better than chiang-05 ( p < .075 ) . ) the results in section 4 demonstrate , to our knowledge for the first time , that significant and sometimes substantial gains over baseline can be obtained by incorporating soft syntactic constraints into hiero 's translation model . within language , we also see considerable consistency across multiple test sets , in terms of which constraints tend to help most . furthermore , our results provide some insight into why the original approach may have failed to yield a positive outcome . for chinese , we found that when we defined finer-grained versions of the exact-match features , there was value for some constituency types in biasing the model to favor matching the source language parse . moreover , we found that there was significant value in allowing the model to be sensitive to violations ( crossing boundaries ) of source parses . these results confirm that parser quality was not the limitation in the original work ( or at least not the only limitation ) , since in our experiments the parser was held constant . looking at combinations of new features , some " double-feature " combinations ( vp2 , ip2 ) achieved large gains , although note that more is not necessarily better : combinations of more features did not yield better scores , and some did not yield any gain at all . no conflated feature reached significance , but it is not the case that all conflated features are worse than their same-constituent " double-feature " counterparts . we found no simple correlation between finer-grained feature scores ( and/or boundary condition type ) and combination or conflation scores . since some combinations seem to cancel individual contributions , we can conclude that the higher the number of participant features ( of the kinds described here ) , the more likely a cancellation effect is ; therefore , a " double-feature " combination is more likely to yield higher gains than a combination containing more features .
we also investigated whether non-canonical linguistic constituency labels such as prn , frag , ucp and vsb introduce β€œ noise ” , by means of the xp features β€” the xp= feature is , in fact , simply the undifferentiated constituency feature , but sensitive only to β€œ standard ” xps . although performance of xp= , xp2 and all-labels+ were similar to that of the undifferentiated constituency feature , xp+ achieved the highest gain . intuitively , this seems plausible : the feature says , at least for chinese , that a translation hypothesis should incur a penalty if it is translating a substring as a unit when that substring is not a canonical source constituent . having obtained positive results with chinese , we explored the extent to which the approach might improve translation using a very different source language . the approach on arabic-english translation yielded large bleu gains over baseline , as well as significant improvements over the undifferentiated constituency feature . comparing the two sets of experiments , we see that there are definitely language-specific variations in the value of syntactic constraints ; for example , advp , the top performer in arabic , can not possibly perform well for chinese , since in our parses the advp constituents rarely include more than a single word . at the same time , some ip and vp variants seem to do generally well in both languages . this makes sense , since β€” at least for these language pairs and perhaps more generally β€” clauses and verb phrases seem to correspond often on the source and target side . we found it more surprising that no np variant yielded much gain in arabic ; this question will be taken up in future work . space limitations preclude a thorough review of work attempting to navigate the tradeoff between using language analyzers and exploiting unconstrained data-driven modeling , although the recent literature is full of variety and promising approaches . we limit ourselves here to several approaches that seem most closely related . among approaches using parser-based syntactic models , several researchers have attempted to reduce the strictness of syntactic constraints in order to better exploit shallow correspondences in parallel training data . our introduction has already briefly noted cowan et al . ( 2006 ) , who relax parse-tree-based alignment to permit alignment of non-constituent subphrases on the source side , and translate modifiers using a separate phrase-based model , and deneefe et al . ( 2007 ) , who modify syntax-based extraction and binarize trees ( following ( wang et al. , 2007b ) ) to improve phrasal coverage . similarly , marcu et al . ( 2006 ) relax their syntax-based system by rewriting target-side parse trees on the fly in order to avoid the loss of β€œ nonsyntactifiable ” phrase pairs . setiawan et al . ( 2007 ) employ a β€œ function-word centered syntax-based approach ” , with synchronous cfg and extended itg models for reordering phrases , and relax syntactic constraints by only using a small number function words ( approximated by high-frequency words ) to guide the phrase-order inversion . zollman and venugopal ( 2006 ) start with a target language parser and use it to provide constraints on the extraction of hierarchical phrase pairs . unlike hiero , their translation model uses a full range of named nonterminal symbols in the synchronous grammar . 
as an alternative way to relax strict parser-based constituency requirements , they explore the use of phrases spanning generalized , categorial-style constituents in the parse tree , e.g . type np/nn denotes a phrase like the great that lacks only a head noun ( say , wall ) in order to comprise an np . in addition , various researchers have explored the use of hard linguistic constraints on the source side , e.g . via β€œ chunking ” noun phrases and translating them separately ( owczarzak et al. , 2006 ) , or by performing hard reorderings of source parse trees in order to more closely approximate target-language word order ( wang et al. , 2007a ; collins et al. , 2005 ) . finally , another soft-constraint approach that can also be viewed as coming from the data-driven side , adding syntax , is taken by riezler and maxwell ( 2006 ) . they use lfg dependency trees on both source and target sides , and relax syntactic constraints by adding a β€œ fragment grammar ” for unparsable chunks . they decode using pharaoh , augmented with their own log-linear features ( such as p ( esnippet|fsnippet ) and its converse ) , side by side to β€œ traditional ” lexical weights . riezler and maxwell ( 2006 ) do not achieve higher bleu scores , but do score better according to human grammaticality judgments for in-coverage cases . when hierarchical phrase-based translation was introduced by chiang ( 2005 ) , it represented a new and successful way to incorporate syntax into statistical mt , allowing the model to exploit non-local dependencies and lexically sensitive reordering without requiring linguistically motivated parsing of either the source or target language . an approach to incorporating parser-based constituents in the model was explored briefly , treating syntactic constituency as a soft constraint , with negative results . in this paper , we returned to the idea of linguistically motivated soft constraints , and we demonstrated that they can , in fact , lead to substantial improvements in translation performance when integrated into the hiero framework . we accomplished this using constraints that not only distinguish among constituent types , but which also distinguish between the benefit of matching the source parse bracketing , versus the cost of using phrases that cross relevant bracketing boundaries . we demonstrated improvements for chineseenglish translation , and succeed in obtaining substantial gains for arabic-english translation , as well . our results contribute to a growing body of work on combining monolingually based , linguistically motivated syntactic analysis with translation models that are closely tied to observable parallel training data . consistent with other researchers , we find that β€œ syntactic constituency ” may be too coarse a notion by itself ; rather , there is value in taking a finergrained approach , and in allowing the model to decide how far to trust each element of the syntactic analysis as part of the system ’ s optimization process . this work was supported in part by darpa prime agreement hr0011-06-2-0001 . the authors would like to thank david chiang and adam lopez for making their source code available ; the stanford parser team and mary harper for making their parsers available ; david chiang , amy weinberg , and clip laboratory colleagues , particularly chris dyer , adam lopez , and smaranda muresan , for discussion and invaluable assistance .
soft syntactic constraints for hierarchical phrased-based translation in adding syntax to statistical mt , there is a tradeoff between taking advantage of linguistic analysis , versus allowing the model to exploit linguistically unmotivated mappings learned from parallel training data . a number of previous efforts have tackled this tradeoff by starting with a commitment to linguistically motivated analyses and then finding appropriate ways to soften that commitment . we present an approach that explores the tradeoff from the other direction , starting with a context-free translation model learned directly from aligned parallel text , and then adding soft constituent-level constraints based on parses of the source language . we obtain substantial improvements in performance for translation from chinese and arabic to english . we revise this method by distinguishing different constituent syntactic types , and define features for each type to count whether a phrase matches or crosses the syntactic boundary . we find that these constituent constraints are sensitive to the language pair .
a unification method for disjunctive feature descriptions although disjunction has been used in several unificationbased grammar formalisms , existing methods of unification have been unsatisfactory for descriptions containing large quantities of disjunction , because they require exponential time . this paper describes a method of unification by successive approximation , resulting in better average performance . disjunction has been used in several unification-based grammar formalisms to represent alternative structures in descriptions of constituents . disjunction is an essential component of grammatical descriptions in kay 's functional unification grammar [ 6 ] , and it has been proposed by karttunen as a linguistically motivated extension to patr-ii [ 2 ] . in previous work two methods have been used to handle disjunctive descriptions in parsing and other computational applications . the first method requires expanding descriptions to disjunctive normal form ( dnf ) so that the entire description can be interpreted as a set of structures , each of which contains no disjunction . this method is exemplified by definite clause grammar [ 8 ] , which eliminates disjunctive terms by expanding each rule containing disjunction into alternative rules . it is also the method used by kay [ 7 ] in parsing fug . this method works reasonably well for small grammars , but it is clearly unsatisfactory for descriptions containing more than a small number of disjunctions , because the dnf expansion requires an amount of space which is exponential in the number of disjunctions . the second method , developed by karttunen [ 2 ] , uses constraints on disjuncts which must be checked whenever a disjunct is modified . karttunen 's method is only applicable to value disjunctions ( i.e . those disjunctions used to specify the value of a single feature ) , and it becomes complicated and inefficient when disjuncts contain non-local dependencies ( i.e . values specified by path expressions denoting another feature ) . in previous research [ 4,5 ] we have shown how descriptions of feature structures can be represented by a certain type of logical formula , and that the consistency problem for disjunctive descriptions is np-complete . this result indicates , according to the widely accepted mathematical assumption that p np , that any complete unification algorithm for disjunctive descriptions will require exponential time in the worst case . however , this result does not preclude algorithms with better average performance , such as the method described in the remainder of this paper . this method overcomes the shortcomings of previously existing methods , and has the following desirable properties : the most common unification methods for non-disjunctive feature structures use a directed graph ( dg ) representation , in which arcs are labeled by names of features , and nodes correspond to values of features . for an introduction to these methods , the reader is referred to shieber 's survey [ 11 ] . in the remainder of this section we will define a data structure for disjunctive descriptions , using dg structures as a basic component . in the following exposition , we will carefully observe the distinction between feature structures and their descriptions , as explained in [ 4 ] . feature structures will be represented by dgs , and descriptions of feature structures will be represented by logical formulas of the type described in [ 4 ] . 
syntax for formulas of this feature description logic ( hereafter called fdl ) is given in figure 1 , where a and l are sets of symbols used to denote atomic values and feature labels , respectively : nil , denoting no information ; top , denoting inconsistent information ; a , where a ∈ a , to describe atomic values ; l : φ , where l ∈ l and φ ∈ fdl , to describe structures in which the feature labeled by l has a value described by φ ; [ < p1 > , ... , < pn > ] , where each pi ∈ l* , to describe an equivalence class of paths sharing a common value in a feature structure ; and φ ∧ ψ and φ ∨ ψ , where φ , ψ ∈ fdl , for conjunction and disjunction of descriptions . note , in particular , that disjunction is used in descriptions of feature structures , but not in the structures themselves . as we have shown ( see [ 9 ] ) , there is a unique minimal satisfying dg structure for any nondisjunctive fdl formula , so we can represent the parts of a formula which do not contain any disjunction by dgs . dgs are a more compact way of representing the same information that is contained in an fdl formula , provided the formula contains no disjunction . let us define an unconditional conjunct to be a conjunct of a formula which contains no occurrences of disjunction . after path expansion any formula can be put into the form uconj ∧ disj1 ∧ ... ∧ disjm , where uconj contains no occurrences of disjunction , and each disji , for 1 ≤ i ≤ m , is a disjunction of two or more alternatives . the uconj part of the formula is formed by using the commutative law to bring all unconditional conjuncts of the formula together at the front . of course , there may be no unconditional conjuncts in a formula , in which case uconj would be the formula nil . each disjunct may be any type of formula , so disjuncts can also be put into a similar form , with all unconditional conjuncts grouped together before all disjunctive components . thus the disjunctions of a formula can be put into the form ( uconj1 ∧ disj11 ∧ ... ∧ disj1x ) ∨ ... ∨ ( uconjn ∧ disjn1 ∧ ... ∧ disjny ) . the embedding of conjuncts within disjuncts is preserved , but the order of conjuncts may be changed . the unconditional conjuncts of a formula contain information that is more definite than the information contained in disjunctions . thus a formula can be regarded as having a definite part , containing only unconditional conjuncts , and an indefinite part , containing a set of disjunctions . the definite part contains no disjunction , and therefore it may be represented by a dg structure . to encode these parts of a formula , let us define a feature-description as a type of data structure having two components : definite : a dg structure , containing no disjunction ; indefinite : a set of disjunctions , where each disjunction is a set of feature-descriptions . it is possible to convert any fdl formula into a feature-description structure by a simple automatic procedure , as described in [ 5 ] . this conversion does not add or subtract any information from a formula , nor increase its size in any significant way . it simply identifies components of the formula which may be converted into a more efficient representation as dg structures . a feature-description is conceptually equivalent to a special kind of and/or graph , in which the terminal nodes are represented by dg structures . for example , figure 2 shows an and/or graph equivalent to one such formula . in the and/or graph representation , each and-node represents a feature-description .
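as a concrete illustration , the following is a minimal python sketch , not the paper's data structure definition , that encodes a dg as a nested dict and a feature-description as a definite dg plus a list of disjunctions , each disjunction a list of alternative feature-descriptions ; the class and field names are assumptions made here for readability :

# a minimal sketch (illustrative names, not the paper's notation): a dg is a
# nested dict mapping feature labels to atomic values or embedded dgs; a
# feature-description pairs a definite dg with an indefinite list of
# disjunctions, each disjunction a list of alternative feature-descriptions.
from dataclasses import dataclass, field

DG = dict   # feature label -> atomic value (str) or embedded DG

@dataclass
class FeatureDescription:
    definite: DG = field(default_factory=dict)
    indefinite: list = field(default_factory=list)   # list of disjunctions

# encodes ( subj : person : 3 ) ∧ ( number : sing ∨ number : pl )
fd = FeatureDescription(
    definite={"subj": {"person": "3"}},
    indefinite=[[FeatureDescription(definite={"number": "sing"}),
                 FeatureDescription(definite={"number": "pl"})]],
)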
the first outgoing arc from an and-node represents the definite component of a feature-description , and the remaining outgoing arcs represent the indefinite component . each or-node represents a disjunction . in this section we will give a complete algorithm for unifying two feature-descriptions , where one or both may contain disjunction . this algorithm is designed so that it can be used as a relatively efficient approximation method , with an optional step to perform complete consistency checking when necessary . given two feature-descriptions , the strategy of the unification algorithm is to unify the definite components of the descriptions first , and examine the compatibility of indefinite components later . disjuncts are eliminated from the description when they are inconsistent with definite information . this strategy avoids exploring disjuncts more than once when they are inconsistent with definite information . the exact algorithm is described in figure 3 ; it has three major steps . figure 3 : function unify-desc ( f , g ) returns feature-description , where f and g are feature-descriptions . 1 . unify definite components : let new-def = unify-dgs ( f.definite , g.definite ) ; if new-def = top , then return failure ; let desc = a feature-description with desc.definite = new-def and desc.indefinite = f.indefinite ∪ g.indefinite ; if desc.indefinite is empty , then return desc . 2 . check compatibility of indefinite components with new-def : let new-desc = check-indef ( desc , new-def ) ; if new-desc = failure , then return failure . 3 . complete exhaustive consistency checking , if necessary : if new-desc.indefinite is empty , or if complete checking is not required , then return new-desc ; else let n = 1 and repeat while n < cardinality of new-desc.indefinite . in the first step , the definite components of the two descriptions are unified together , producing a dg structure , new-def , which represents the definite information of the result . this step can be performed by existing unification algorithms for dgs . in the second step , the indefinite components of both descriptions are checked for compatibility with new-def , using the function check-indef , which is defined in figure 4 . check-indef uses the function check-disj , defined in figure 5 , to check the compatibility of each disjunction with the dg structure given by the parameter cond . the compatibility of two dgs can be checked by almost the same procedure as unification , but the two structures being checked are not actually merged as they are in unification . in the third major step , if any disjunctions remain , and it is necessary to do so , disjuncts of different disjunctions are considered in groups , to check whether they are compatible together . this step is performed by the function nwise-consistency , defined in figure 6 . when the parameter n to nwise-consistency has the value 1 , one disjunct is checked for compatibility with all other disjunctions of the description in a pairwise manner . the pairwise manner of checking compatibility can be generalized to groups of any size by increasing the value of the parameter n. while this third step of the algorithm is necessary in order to ensure consistency of disjunctive descriptions , it is not necessary to use it every time a description is built during a parse .
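the following is a minimal runnable python sketch , under simplifying assumptions ( disjuncts are plain dgs encoded as nested dicts , and the re-checking loop that the paper's check-indef performs when a forced disjunct is folded into the definite part is omitted ) , of steps 1 and 2 of the algorithm ; step 3 is sketched after the worked example below . the function and variable names follow the paper loosely but are not its code .

# a simplified sketch of unify-desc, steps 1 and 2 (not the paper's code):
# dgs are nested dicts, a feature-description is a (definite, indefinite) pair,
# and each disjunct is itself just a dg.
TOP = "TOP"          # inconsistent information
FAILURE = "failure"

def unify_dgs(d1, d2):
    """unify two dgs (nested dicts or atomic values); TOP marks a clash."""
    if not isinstance(d1, dict) or not isinstance(d2, dict):
        return d1 if d1 == d2 else TOP
    result = dict(d1)
    for label, value in d2.items():
        result[label] = unify_dgs(result[label], value) if label in result else value
        if result[label] == TOP:
            return TOP
    return result

def check_indef(definite, disjunctions):
    """step 2: drop disjuncts whose information clashes with the definite dg;
    an emptied disjunction means failure, and a disjunction reduced to a single
    disjunct is folded into the definite part (the paper then re-checks the
    other disjunctions; that loop is omitted here)."""
    remaining = []
    for disjunction in disjunctions:
        survivors = [d for d in disjunction if unify_dgs(definite, d) != TOP]
        if not survivors:
            return FAILURE
        if len(survivors) == 1:
            definite = unify_dgs(definite, survivors[0])
        else:
            remaining.append(survivors)
    return definite, remaining

def unify_desc(f, g):
    """steps 1 and 2 of unify-desc for (definite, indefinite) pairs."""
    new_def = unify_dgs(f[0], g[0])          # step 1: unify definite parts
    if new_def == TOP:
        return FAILURE
    indefinite = f[1] + g[1]
    if not indefinite:
        return new_def, []
    return check_indef(new_def, indefinite)  # step 2: filter the disjunctions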
in practice , we find that the performance of the algorithm can be tuned by using this step only at strategic points during a parse , since it is the most inefficient step of the algorithm . figure 4 defines the function check-indef ( desc , cond ) , which returns a feature-description , where desc is a feature-description and cond is a dg ; it begins by letting indef = desc.indefinite ( a set of disjunctions ) , new-def = desc.definite ( a dg ) , and unchecked-parts = true . figure 5 defines the function check-disj ( disj , cond ) , which returns a disjunction , where disj is a disjunction of feature-descriptions and cond is a dg . in our application , using the earley chart parsing method , it has proved best to use nwise-consistency only when building descriptions for complete edges , but not when building descriptions for active edges . note that two feature-descriptions do not become permanently linked when they are unified , unlike unification for dg structures . the result of unifying two descriptions is a new description , which is satisfied by the intersection of the sets of structures that satisfy the two given descriptions . the new description contains all the information that is contained in either of the given descriptions , subtracting any disjuncts which are no longer compatible . in order to illustrate the effect of each step of the algorithm , let us consider an example of unifying the description of a known constituent with the description of a portion of a grammar . this exemplifies the predominant type of structure building operation needed in a parsing program for functional unification grammar . the example given here is deliberately simple , in order to illustrate how the algorithm works with a minimum amount of detail ; it is not intended as an example of a linguistically motivated grammar . after step 1 of the algorithm , the definite components of the two descriptions have been unified , and their indefinite components have been conjoined together . in step 2 of the algorithm each of the disjuncts of desc.indefinite is checked for compatibility with desc.definite , using the function check-indef . in this case , all disjuncts are compatible with the definite information except for one : the disjunct of the third disjunction which contains the feature number : sing . this disjunct is eliminated , and the only remaining disjunct in the disjunction ( i.e. , the disjunct containing number : pl ) is unified with desc.definite . the result after this step is shown in figure 9 . the four disjuncts that remain are numbered for convenience . in step 3 , nwise-consistency is used with 1 as the value of the parameter n. a new description is hypothesized by unifying disjunct ( 1 ) with the definite component of the description ( i.e. , new-desc.definite ) . then disjuncts ( 3 ) and ( 4 ) are checked for compatibility with this hypothesized structure : ( 3 ) is not compatible , because the values of the transitivity features do not unify . disjunct ( 4 ) is also incompatible , because it has goal : person : 3 , and the hypothesized description has [ < subj > , < goal > ] , along with subj : person : 2 . therefore , since there is no compatible disjunct among ( 3 ) and ( 4 ) , the hypothesis that ( 1 ) is compatible with the rest of the description has been shown to be invalid , and ( 1 ) can be eliminated . it follows that disjunct ( 2 ) should be unified with the definite part of the description . now disjuncts ( 3 ) and ( 4 ) are checked for compatibility with the definite component of the new description : ( 3 ) is no longer compatible , but ( 4 ) is compatible .
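to complement the worked example , here is a rough python sketch of the n = 1 ( pairwise ) case of nwise-consistency , reusing the top , failure , unify_dgs and check_indef helpers from the sketch above ; it illustrates the idea and is not the paper's figure 6 . each disjunct of a disjunction is hypothesized against the definite part , and the hypothesis survives only if every other disjunction still has at least one compatible disjunct ; forced choices are folded into the definite part and the process repeats .

# a rough sketch of nwise-consistency for n = 1, reusing TOP, FAILURE,
# unify_dgs and check_indef from the earlier sketch (illustration only).
def nwise_consistency(definite, disjunctions):
    pending = list(disjunctions)
    progress = True
    while progress:
        progress = False
        for i, disjunction in enumerate(pending):
            others = pending[:i] + pending[i + 1:]
            survivors = []
            for disjunct in disjunction:
                hypothesis = unify_dgs(definite, disjunct)
                # keep the disjunct only if the hypothesis is consistent with
                # the definite part and with every other disjunction
                if hypothesis != TOP and check_indef(hypothesis, others) != FAILURE:
                    survivors.append(disjunct)
            if not survivors:
                return FAILURE            # no disjunct of this disjunction survives
            if len(survivors) == 1:
                # a forced choice: fold it in and start over with the new definite part
                definite = unify_dgs(definite, survivors[0])
                pending = others
                progress = True
                break
            pending[i] = survivors        # prune the disjunction and continue
    return definite, pending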
therefore , ( 3 ) is eliminated , and ( 4 ) is unified with the definite information . no disjunctions remain in the result , as shown in figure 10 . referring to figure 3 , note that the function unify-desc may terminate after any of the three major steps . after each step it may detect inconsistency between the two descriptions and terminate , returning failure , or it may terminate because no disjunctions remain in the description . therefore , it is useful to examine the complexity of each of the three steps independently . let n represent the total number of symbols in the combined description f ∧ g , and d represent the total number of disjuncts ( in both top-level and embedded disjunctions ) contained in f ∧ g . step 1 . this step performs the unification of two dg structures . ait-kaci [ 1 ] has shown how this operation can be performed in almost linear time by the union/find algorithm . its time complexity has an upper bound of O(n log n) . since an unknown amount of a description may be contained in the definite component , this step of the algorithm also requires O(n log n) time . step 2 . for this step we examine the complexity of the function check-indef . there are two nested loops in check-indef , each of which may be executed at most once for each disjunct in the description . the inner loop checks the compatibility of two dg structures , which requires no more time than unification . thus , in the worst case , check-indef requires O(d^2 n log n) time . step 3 . nwise-consistency requires at most O(2^(d/2)) time . in this step , nwise-consistency is called at most ( d/2 ) - 1 times . therefore , the overall complexity of step 3 is O(2^(d/2)) . discussion . while the worst case complexity of the entire algorithm is O(2^d) , an exponential , it is significant that it often terminates before step 3 , even when a large number of disjunctions are present in one of the descriptions . thus , in many practical cases the actual cost of the algorithm is bounded by a polynomial that is at most d^2 n log n . since d must be less than n , this complexity function is almost cubic . even when step 3 must be used , the number of remaining disjunctions is often much fewer than d/2 , so the exponent is usually a small number . the algorithm performs well in most cases , because the three steps are ordered in increasing complexity , and the number of disjunctions can only decrease during unification . the algorithm presented in the previous sections has been implemented and tested as part of a general parsing method for systemic functional grammar , which is described in [ 3 ] . the algorithm was integrated with the structure building module of the patr-ii system [ 10 ] , written in the zetalisp programming language . while the feature-description corresponding to a grammar may have hundreds of disjunctions , the descriptions that result from parsing a sentence usually have only a small number of disjunctions , if any at all . most disjunctions in a systemic grammar represent possible alternative values that some particular feature may have ( along with the grammatical consequences entailed by choosing particular values for the feature ) . in the analysis of a particular sentence most features have a unique value , and some features are not present at all . when disjunction remains in the description of a sentence after parsing , it usually represents ambiguity or an underspecified part of the grammar .
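to make the contrast between the polynomial bound and the exponential worst case concrete , here is a small back-of-the-envelope computation ; the sample values of n and d are arbitrary , chosen only for illustration .

# compares the polynomial bound d^2 * n * log n for steps 1-2 with the
# exponential 2^d worst case of full consistency checking; values are arbitrary.
import math

for n, d in [(200, 10), (1000, 30), (5000, 100)]:
    polynomial = d ** 2 * n * math.log2(n)
    exponential = 2 ** d
    print(f"n={n:5d} d={d:3d}  d^2*n*log n = {polynomial:.2e}   2^d = {exponential:.2e}")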
with this implementation of the algorithm , sentences of up to 10 words have been parsed correctly , using a grammar which contains over 300 disjunctions . the time required for most sentences is in the range of 10 to 300 seconds , running on lisp machine hardware . the fact that sentences can be parsed at all with a grammar containing this many disjunctions indicates that the algorithm is performing much better than its theoretical worst case time of O(2^d) ( for comparison , 2^280 is taken to be a rough estimate of the number of particles in the universe ) . the timings shown in table 1 , obtained from the experimental parser for systemic grammar , also indicate that a dramatic increase in the number of disjunctions in the grammar does not result in an exponential increase in parse time . g98 is a grammar containing 98 disjunctions , and g440 is a grammar containing 440 disjunctions . the total time used to parse each sentence is given in seconds . the unification method presented here represents a general solution to a seemingly intractable problem . this method has been used successfully in an experimental parser for a grammar containing several hundred disjunctions in its description . therefore , we expect that it can be used as the basis for language processing systems requiring large grammatical descriptions that contain disjunctive information , and refined as necessary and appropriate for specific applications . while the range of speed achieved by a straightforward implementation of this algorithm is acceptable for grammar testing , even greater efficiency would be desirable ( and necessary for applications demanding fast real-time performance ) . therefore , we suggest two types of refinement to this algorithm as topics for future research : using heuristics to determine an opportune ordering of the disjuncts within a description , and using parallel hardware to implement the compatibility tests for different disjunctions . i would like to thank bill rounds , my advisor during graduate studies at the university of michigan , for his helpful criticism of earlier versions of the algorithm presented here . i would also like to thank bill mann for suggestions during its implementation at usc/isi , and stuart shieber for providing help in the use of the patr-ii system . this research was sponsored in part by the united states air force office of scientific research contracts fq8671-8401007 and f49620-87-c-0005 , and in part by the united states defense advanced research projects agency under contract mda903-81-c-0335 ; the opinions expressed here are solely those of the author .
a unification method for disjunctive feature descriptions although disjunction has been used in several unification-based grammar formalisms , existing methods of unification have been unsatisfactory for descriptions containing large quantities of disjunction , because they require exponential time . this paper describes a method of unification by successive approximation , resulting in better average performance . the general problem of unifying two disjunctive feature structures is non-polynomial in the number of disjunctions . we present a technique which , for every set of n conjoined disjunctions , checks the consistency first of single disjuncts against the definite part of the description , then that of pairs , and so on up to n-tuples , for full consistency .